Understanding the Meltdown Attack

This month, security researchers released a whitepaper describing the Meltdown attack, which allows anyone to read the full physical memory of a system by exploiting a vulnerability in Intel processors. If that sounds bad, that’s because it is. It means that if you’re running workloads on a public cloud provider, and you don’t have a dedicated server, an attacker can read what your workloads are putting into memory. This includes passwords, private keys, credit card numbers, your cat’s middle name, etc.

How Meltdown works

As a way of eking out every ounce of speed, modern processors perform out-of-order execution. Rather than executing a program one instruction at a time, the processor fetches multiple instructions at once and places them in an instruction queue. It then executes each instruction as soon as possible (and usually out-of-order) and stores each instruction’s output (if there is any) in a cache.
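
The effect is easier to see in a toy model. Here’s a Python sketch (purely illustrative; real CPUs use reorder buffers and register renaming, which are far more involved) in which each instruction issues as soon as its source registers are ready, so a slow instruction doesn’t block independent instructions queued behind it:

```python
# Toy model of out-of-order execution (a sketch, not real microarchitecture):
# an instruction issues as soon as its source registers are ready, so a slow
# instruction doesn't block independent instructions queued behind it.

def execute_out_of_order(program, ready_regs):
    """program: list of (dest_reg, source_regs, latency_in_cycles).
    Returns instruction indexes in the order they finish executing."""
    in_flight = {}      # instruction index -> cycles remaining
    done = set()
    finish_order = []
    while len(done) < len(program):
        # Advance every in-flight instruction by one cycle; retire finishers.
        finished = [i for i in in_flight if in_flight[i] == 1]
        for i in in_flight:
            in_flight[i] -= 1
        for i in finished:
            del in_flight[i]
            done.add(i)
            ready_regs.add(program[i][0])   # result becomes available
            finish_order.append(i)
        # Issue every waiting instruction whose operands are now ready.
        for i, (dest, sources, latency) in enumerate(program):
            if i not in done and i not in in_flight and set(sources) <= ready_regs:
                in_flight[i] = latency
    return finish_order

program = [
    ("r1", ["r0"], 3),   # 0: a slow load into r1
    ("r2", ["r1"], 1),   # 1: depends on the slow load
    ("r3", ["r0"], 1),   # 2: independent of both
]
print(execute_out_of_order(program, {"r0"}))  # [2, 0, 1]
```

Instruction 2 finishes before instruction 0 even though it comes later in the program. That’s out-of-order execution in a nutshell.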

Now here’s the kicker: out-of-order execution runs instructions that the program isn’t allowed to run. Take the following assembly instruction:

; rcx = kernel address
 mov al, byte [rcx]

This copies one byte of data from kernel memory and places it in a small piece of temporary CPU storage called a register. With in-order execution, the CPU would not allow an unprivileged program to execute this instruction.

But with out-of-order execution, the CPU does execute the instruction, and it stores the resulting byte value in a temporary cache. The CPU then raises an exception and terminates the program.

But the byte value still influences the cache. This is where the flaw in Intel’s processors lies. The CPU should restore the cache to its previous state as soon as the exception is raised. But it doesn’t. It leaves the evidence there, exposed to a type of side-channel attack called a cache attack.

Cache attacks have been around for years, and there are several different types a hacker can use in conjunction with Meltdown to recover the data left in the cache. For a great explanation of how these attacks work, check out Bert Hubert’s article on Spectre and Meltdown. And it’s trivial for an attacker to pull one off quickly.
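
If it helps to see the shape of one, here’s a toy model in Python. It’s only an illustration: a real Flush+Reload attack measures access latency in nanoseconds to tell cached from uncached memory, whereas here the “cache” is just a set of addresses.

```python
# Toy model of a Flush+Reload-style cache attack. Illustration only:
# real attacks time memory accesses; here the "cache" is a simple set.

PAGE = 4096  # one probe page per possible byte value

def transient_access(secret_byte, cache):
    """Models the out-of-order window: the secret byte indexes a probe
    array, pulling exactly one page into the cache before the exception
    squashes the architectural result."""
    cache.add(secret_byte * PAGE)

def recover_byte(cache):
    """The attacker 'times' an access to each of the 256 probe pages;
    the one that's already cached reveals the secret byte."""
    for b in range(256):
        if b * PAGE in cache:    # stand-in for "this access was fast"
            return b
    return None

cache = set()                    # attacker has flushed the probe array
transient_access(0x53, cache)    # victim side: the exception was raised
print(hex(recover_byte(cache)))  # 0x53
```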

Only Intel Processors Were Shown to Be Vulnerable

The researchers were not able to duplicate the results on AMD or ARM processors, which also use out-of-order execution. They speculate that those processors may be vulnerable in theory, but they could not demonstrate it at the time they released the paper. You certainly shouldn’t assume that all non-Intel processors are immune to Meltdown.

Disabling out-of-order execution isn’t the answer

Out-of-order execution is a necessary but not sufficient condition for Meltdown. Simply having out-of-order execution enabled is not enough to make Meltdown possible. Disabling it does prevent Meltdown, but the performance impact is substantial.

What about Spectre?

Spectre is not the same as Meltdown. Spectre works against a wider range of CPUs, including Intel, AMD, and ARM, but pulling off a Spectre attack is trickier. Both are equally bad, and both abuse speculative execution, which is why you usually see them lumped together.

You don’t need to replace your hardware

You don’t need to replace your hardware, and anyone who says you do is either misinformed or trying to sell you something. Now for the obvious part: to mitigate Meltdown and Spectre, install the latest security patches for your operating system, BIOS, and CPU firmware (after sufficient testing, of course).


3 Ways to Increase Your IT Earnings in 2018

As 2018 draws near, companies go into hiring mode, and people come and go, which often leaves a lot of open positions. If you qualify to fill one of the more in-demand positions, you can often negotiate a higher salary.

My biggest salary jumps have always come in the first quarter of the year. To increase your chances of getting that salary boost, here are three tips that you should start implementing right now.


Tip #1 – Shun the Snake Oil Tech Fads

These are technologies that sound interesting, seem promising, but either have no real-world use case or are actually impossible. Some current examples include blockchain and quantum computing. If you’re interested in these from a theoretical perspective, by all means, indulge yourself. But don’t expect that a real company is going to hire you as a blockchain or quantum computing expert. These are fads, and like all fads, they’ll die. Don’t let your career die with them.

An easy way to spot nonsense tech fads is to ask yourself, “Is this new technology an improvement over what we have now? If so, is it even possible?” Clearly, blockchain isn’t an improvement over any other database, distributed or otherwise. Quantum computing could theoretically blow classical computing out of the water, but quantum computers require temperatures close to absolute zero, making them practically impossible.

Another tech fad that’s captured the attention of the media is artificial intelligence (AI). Not to be confused with machine learning, the AI hype claims that computers will somehow begin working as well as or better than the human brain, perhaps even to the point of developing consciousness and understanding. Machine learning, on the other hand, deals with statistical analysis and making predictions based on large data sets. It has nothing to do with mimicking the human brain or consciousness.


Tip #2 – Get Certified

Rid your mind of the tripe that “certifications are just paper” and “they don’t prove that you know anything.” The fact is that more certifications = more money. But you have to get certified. Just taking courses isn’t enough. I’ve interviewed people whose resumes listed what courses they took, but they didn’t have the corresponding cert. Don’t do this. It’s a huge strike against you. Take all the courses you need to attain the cert, but then go and get it.

Here are some of the most lucrative and in-demand certification categories going into 2018:

Cloud and networking

Three of the most popular certifications are the AWS Certified Solutions Architect – Associate, the Cisco Certified Network Associate (CCNA), and the Cisco Certified Network Professional (CCNP). There’s no reason you can’t get two of these within the next 3 months.

Hybrid cloud and on-prem virtualization

The Citrix Certified Associate – Virtualization (CCA-V) and Citrix Certified Professional – Virtualization (CCP-V) are evergreen certifications that pertain to both cloud and on-prem virtualization and networking skills. Just having the word “Citrix” on your resume is huge. Having one of the certs is even better. With the right training, you should be able to study for and achieve one of these during the first part of the year.


Security

Information security (infosec) is hot, and it gets hotter with every Equifax hack. The Certified Information Systems Security Professional (CISSP) is a very lucrative certification that’s difficult to achieve. You won’t get it in 3 months. But if you’re dedicated and put in the time to attain it, you can write your own ticket.

How to study

Pluralsight has dozens of courses covering all of these certifications, and you can get unlimited access with a free trial. The courses also have practice exams integrated into the learning experience.


Tip #3 – Update your resume

Update your resume at least once a year. Remove references to obsolete technologies. People may chuckle at your references to Banyan Vines and Windows NT, but those won’t get you an interview. Needless to say, add any new technologies you’ve had a hand in implementing.

Put your certifications front and center on your resume. Put them on your LinkedIn, Twitter, Backchat, Kindler, McSpace, and whatever other job boards you use. Make sure people know you have them. It might seem a little braggy, but it will sharpen the distinction between you and everyone else who doesn’t have them.

Resumes might seem old school, but they’re still important because recruiters literally just Ctrl+F through them searching for various keywords. And guess what keywords they’re looking for. Terms like AWS, Citrix, CCNP, CCNA, Cisco, security, networking, TCP/IP, cloud, etc. Many recruiters don’t know what any of that stuff is, nor do they care. They just want to find someone who has those certs and skills!

Let it be you.

4 Inconvenient but Effective Security Measures

Security usually requires sacrificing convenience (or money). So naturally, we tend to get away with as little security as possible. But if you’re a glutton for punishment, here are 4 very inconvenient but highly effective measures you can take right now to protect yourself from the evils lurking on the interwebs.

Disable JavaScript

Yeah, I know. Every site made since the Web 2.0 days needs JavaScript just for a text input field to work right. It’s a shame, really. But disabling JavaScript isn’t an all-or-nothing deal. Browser extensions let you allow JavaScript for sites you trust and block it for all others. If you’re still using Firefox, you can use the NoScript extension. Chrome users, check out uMatrix.

Disable XSS

Strictly speaking, cross-site scripting (XSS) is an injection attack, but the everyday risk script blockers address is related: you go to one website, and it loads JavaScript from a different domain. Sadly, this practice has become normal with the advent of CDNs. What makes it risky is not so much the cross-site request, but the fact that it happens without you knowing it. You don’t see that legitwebsite.com is loading Nasty.js from someone’s hijacked blog site. Either of the script-blocking extensions I just mentioned will warn you about cross-site scripts and let you decide whether to allow them. I’ve stopped several malicious scripts this way over the years.

Block wide categories of websites

Taking a whitelist approach is too inconvenient, as there are just way too many sites out there to keep up with. Your next best option is to use content-based filtering to block websites by category. I use OpenDNS to achieve this, with a pretty broad list of blocked categories.

Casting that broad a net catches some sites I don’t want blocked, so I allow those on a case-by-case basis. Some categories tend to carry malware more than others, so blocking them wholesale is a pretty effective way to avoid fallout from clicking the wrong link.

Don’t use the app

Smartphones normalized a concept that would’ve been considered bizarre just a few years ago: installing an app for every website you use regularly. We’ve got Twitter, Facebook, Gmail, LinkedIn, etc. Can you imagine installing “the MySpace app” on your Windows XP machine in 2005? Some sites just don’t need an app. You can browse to them on your phone and they work fine.

When you install an app, you usually give it permissions to various system resources – photos, call logs, camera, microphone, etc. Chances are the core functionality doesn’t require most of those. If that app has a vulnerability – or worse, malicious code – then you’ve just turned your phone into a neat little hacker toolkit.


Blockchain is a Passing Fad

Whenever a tech fad comes to an end, it becomes so obvious why it failed. Yet during the hype, it’s easy to miss the problems lurking just below the surface. I want to explore some of the problems I see with public blockchain and why I think it’s not going to live up to the hype.

Blockchain can’t track real things

Whenever a new technology comes along, there’s always a temptation to use it in ways above and beyond what it was originally intended for. Blockchain came to popularity because of Bitcoin, and as Bitcoin grew, people became fascinated by its underlying technology.

But what made Bitcoin popular wasn’t the technology. The whole idea behind Bitcoin was to create a global currency that didn’t have a central monetary authority. Blockchain was just a good means to achieve that.

The fact that blockchain works well for cryptocurrency doesn’t mean that it works well for any sort of transactional database. The idea of digitally moving funds from one account to another doesn’t translate to moving goods along a supply chain. Why not? Because with Bitcoin, the blockchain is the currency you’re moving around. You can’t separate a Bitcoin from the blockchain. If you do, the Bitcoin ceases to exist.

When it comes to supply chain, you’re only moving representations of goods, not the goods themselves. This is a key distinction that people miss. You can assert that a certain string of data represents a tangible thing in the real world, but now that linkage is based on your assertion, not on the blockchain itself. Hence, the blockchain doesn’t add much value.
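
A minimal hash chain makes both points at once: the chain is genuinely tamper-evident for the data it holds, but nothing in it can vouch that “pallet 42” matches any real pallet. (This is a toy sketch, not any real blockchain implementation; there’s no mining or consensus here.)

```python
# Minimal hash-chain sketch: each block commits to the previous block's
# hash, so tampering with any record breaks every later link. Note what
# it can't do: prove that "pallet 42" corresponds to a physical pallet.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain):
    """True if every block still matches the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, {"item": "pallet 42", "location": "warehouse A"})
add_block(chain, {"item": "pallet 42", "location": "truck 7"})
print(verify(chain))                       # True

chain[0]["record"]["location"] = "stolen"  # rewrite history
print(verify(chain))                       # False: the chain notices
```

The chain catches the edit, but whether the honest record ever described a real pallet is still just somebody’s assertion.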

Anyone can create a blockchain

A blockchain is a database, and as anyone who has dealt with those knows, a database is worthless if no one uses it. There are hundreds, probably thousands of different blockchains. If people who work together every day can’t even agree on where to eat lunch, how is everyone in the world going to agree on a single blockchain for any given application? It’s not going to happen.

Ridiculous bandwidth and storage requirements

Right now blockchain is being touted as a security panacea, especially for IoT. There’s just one big problem: IoT devices have small storage and bandwidth capacity, and blockchain requires enormous amounts of storage and bandwidth.

Inconvenient but not more secure

Security always requires giving up some convenience. But the inverse isn’t necessarily true. When I go to pay for my coffee, I can use a piece of plastic, cash, or scan a barcode on my phone. It’s convenient and mostly secure. But if I want to pay with Bitcoin or some other cryptocurrency, I have to drop some bits onto a blockchain and wait minutes or even hours for the hivemind to “confirm” my transaction.

And what benefit do I get in return? Nothing. No, it’s worse than nothing. I lose my ability to dispute the transaction or get a refund because blockchains are designed to be unchangeable (aka immutable).

Controlled by anonymous

Public blockchains are not inherently decentralized. Distributed, yes. Decentralized, no.

When dealing with a credit or debit card, your bank is in charge of keeping track of the transactions. When dealing with cash, keeping up with your spending is entirely up to you. But when it comes to blockchain, thousands of anonymous strangers are in charge of your transactions.

These anonymous strangers are divided into two groups. You’ve got the developers who create and maintain the software required to interact with the blockchain. This gives them the power to change it in any way they see fit, as well as allow or disallow other people to use it (this actually happened recently with the Bitcoin Core/Cash split).

The other group is the people running the nodes that validate blockchain transactions. Ideally this would be a diverse group of honest people spread all over the world. But the reality is that anyone with enough money (e.g. a government) can purchase enough compute power to control the majority of the network. Whoever controls the majority controls the blockchain.

This is arguably the biggest strike against public blockchains because it’s not just a theoretical possibility. It’s already happened. 70% of Bitcoin mining is done in China, only 1% in the US.
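
A toy simulation shows why majority control matters. (It’s a sketch: real proof-of-work consensus weighs hash power rather than giving each node one vote, but the failure mode is the same.)

```python
# Toy majority-rule validation: each node votes on a transaction, and the
# majority wins. Honest nodes check validity; dishonest ones approve anything.

def consensus(nodes, transaction):
    votes = [node(transaction) for node in nodes]
    return votes.count(True) > len(votes) / 2

honest = lambda tx: tx["valid"]     # validates honestly
dishonest = lambda tx: True         # rubber-stamps everything

bogus = {"valid": False, "amount": 1_000_000}
print(consensus([honest] * 10, bogus))                   # False: rejected
print(consensus([honest] * 4 + [dishonest] * 6, bogus))  # True: accepted
```

With an honest majority the bogus transaction is rejected; tip the majority the other way and the same transaction sails through.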

Ripe for attack

Even if you assume that most people are honest and will operate clean nodes, there’s still the small problem of security. Imagine that former Soviet spies Boris and Natasha develop a worm targeting a particular blockchain implementation like Ethereum. They’re so 1337 that they manage to infect 80% of the nodes, allowing them to inject bogus data into the chain and validate it.

Don’t underestimate the fallout of this. Even if the participants discover the attack quickly, the damage has already been done. Everyone else now has to face the ugly decision of whether to trust a blockchain they know has already been compromised. This isn’t just a theoretical scenario. Something similar already happened with Ethereum. It resulted in the developers forking the Ethereum chain. That’s why we now have two Ethereums (ETH and ETC).

Architected insecurely

The Boris and Natasha scenario might sound a little bit too spy-movie-ish, but the nature of a public blockchain requires it to be open to the internet. This isn’t a private database locked down behind layers of security. It’s a peer-to-peer app that is more than happy to accept your malformed TCP packet.

Does that mean it’s impossible to implement a secure, public blockchain? No. But it does mean that it’s much, much harder than just using a private database behind more proven layers of security. Once again, why not just use a traditional database? Blockchain doesn’t offer enough of an advantage to outweigh the risks.


Yes, You Need IT Certifications

Certifications are often lambasted as “worthless pieces of paper” by people who insist that “experience is more important.” But for some people, certifications are more important than experience.

A substitute for experience

Newcomers to the IT world face the classic problem: how do you get experience without a job? Sure, you can tinker around on your own time, but how do you prove that experience? That’s where certifications come in.

Certifications show a prospective employer that you care enough and have the initiative to spend your own time and money to become a better IT professional. You might have tons of experience with IT as a hobby. But how do you prove that?

With a piece of paper.

Certifications get you hired

They are what get your resume looked at, instead of being tossed into the shredder by HR.

They are what get you the interview.

They are the tie-breaker between you and that other equally qualified person who doesn’t have a cert.

If you have a stack of certifications under your belt, you’re going to be a step ahead of the naysayers who think certifications are a joke, a scam, or a racket.

Certifications mean higher pay

My first IT certification was the CompTIA A+ in 2002. That helped me land one very low paying job. During that time, I also got my Microsoft MCSA.

Fast-forward a few years. I got a job at a local technology reseller where I earned my Network+, Cisco CCNA, CCDA, and finally my CCNP, all within a year. Shortly after that, I was able to get a job that almost doubled my salary.

A couple years later, I got my Citrix CCA. My salary went up by 50%. It increased a few percent each year thereafter.

Oh, and I forgot to mention: no college degree.

You’re always a beginner

Even if you’ve been in the field for 20 years, you’re always a beginner when it comes to emerging technologies. You can work your tail off to get experience with the latest and greatest, but if you want to turn that experience into a raise or new position, you have to prove your skills.

When you put in your resume against someone fresh out of college – and they have that highly sought after certification and you don’t – well, you can guess who’s getting the callback.

Containers are Virtual Machines After All

Are containers virtual machines or not?

There’s a common analogy that VMs are like houses and containers are like apartments. And you are the application. When you live in a house, you have free rein to do as you please. When you live in an apartment, you have to share certain spaces, and parts of the building are off-limits. Interestingly, this analogy suggests that the difference between containers and VMs is not one of architecture but of implementation!

Apartment buildings and houses both have rooms, water, electric, roofs, and doors. Containers and VMs both virtualize compute, storage, networking, and memory. So what’s the difference, really?

Shared Almost Nothing

Those who say that a container is not a virtual machine are quick to note that all containers on a host share the same kernel. Not a copy of the same kernel, but the exact same kernel running on the host.

Virtual machines, on the other hand, have completely isolated kernels. If you’re running 100 identical Linux VMs, each VM has its own unique copy of the same kernel. You can upgrade the kernel in one, and it doesn’t affect any others. That seems like a pretty significant difference. But let’s look a little deeper.

Containers do use virtualization, but rather than fully virtualizing compute, network, storage, and memory, containers do something a little different. They fully virtualize compute and networking. Storage and memory, on the other hand, are mostly but not completely virtualized. The kernel is stored on the host, and the container is given read-only access to it. The rest of a container’s storage and memory is virtualized.

In a container, the kernel is necessarily separated from the rest of the virtual machine. But that hardly means containers aren’t VMs.

VMs that Look Like Containers?

With a little tweaking, a traditional virtual machine could meet the criteria of a container. One could, for example, create a shared, read-only filesystem containing a Linux kernel and have multiple VMware virtual machines booting from it. As those VMs boot, VMware could identify the duplicate virtual memory blocks and deduplicate them. Those VMs wouldn’t cease to be virtual machines just because they’re all sharing a kernel.

Let’s take a more realistic example: multiple virtual machines booting Kali Linux from a shared, read-only ISO. Each VM has its own virtual disk for persistence, but the operating system kernel is shared. Is that a container?

Containers Were Born On Linux

There’s a reason containers were born on Linux and not Windows. The Linux architecture lends itself to a clean separation of the kernel from everything else. Windows, on the other hand, just mashes it all together. That’s why Windows and *nix went in opposite directions. To get more efficient use of system resources, FreeBSD got jails, and Linux got OpenVZ, LXC, and eventually Docker. Windows just got full-on x86 virtualization.

Containers Mimic Physical Machines

As I said in an earlier post, virtualization is mimicry. Containers present applications with compute, memory, storage, and networking, and control how applications can use those resources. Virtual machines do the exact same thing. Not only that, you can start, stop, pause, and shut down containers, just like VMs!

At this point, it’s starting to become clear that the earlier analogy – VMs being like houses – is flawed. Virtual machines aren’t like houses per se, but like buildings in general. Containers are a particular implementation of virtual machine, like an apartment is a particular instance of a building.

Containers Aren’t Those VMs

Let me be clear that I’m not suggesting that people who say containers aren’t VMs are wrong.

When people say containers aren’t VMs, they’re trying to explain that a container is not the kind of virtual machine you’d find in VMware or Hyper-V. It’s not the type of VM you attach an ISO to, boot up, and install an operating system on. It’s perfectly valid to insist that containers are not those types of virtual machines because, well, they’re not. They’re very different. My goal here is to highlight the similarities so that the differences become more apparent.

Also, I’m not saying containers are bad. Far from it. I’ve been using Docker since 2014 and I love it. And I’m excited to see how Docker for Windows Server will fare. Application virtualization has been a holy grail of Windows for a long time, and Docker just might finally deliver it.

A Quick and Dirty Review of AWS Diagramming Software

I’m trying out different services to import an AWS environment and turn it into a technically correct and aesthetically pleasing diagram. Surprisingly, although most of the services can correctly identify the resources, none of them are able to identify network connections. If you’re hoping for something to autogenerate detailed visual documentation of your AWS environment, well, sorry, but we’re not there yet. However, if you’re okay with copy/paste and making some manual tweaks, one of these services might be right for you.


By default, the import function doesn’t actually create a diagram for you. It just imports the elements and it’s up to you to arrange them.

If you use the auto layout feature (beta), each VPC gets put on a separate page, and the diagram doesn’t show VPC peerings.


Free for a single user

The auto layout feature (did I mention it’s in beta?) arranges resources for you, placing each VPC on a separate page. Copying and pasting from one page to another is seamless.

Clear instructions to help you set up an IAM user and policy for importing from AWS

Rating: 3/5


Each VPC has its own diagram, and there doesn’t seem to be a way to edit them.

If you want to export diagrams, you have to pay $39/month.

The signup and login forms are quirky and make odd asynchronous requests to the far reaches of the interwebs for no apparent reason. If you use any sort of security software, you may have a problem even signing up.


They offer a free 2-week trial without requiring a credit card.

The diagrams are detailed and give you the option to show or hide resource names.

Rating: 3/5


You can’t import from AWS unless you sign up for the Pro Solo plan ($49/month). They require a credit card up front before you can even try the service out.


The diagrams are 3-D and have a game board feel. As you add resources to the diagram, it tells you the monthly AWS cost based on usage.

The signup process is super easy, requiring only a name, email and password. You don’t even have to confirm your email before you can use the service.

Rating: 2/5


Promising contenders that need work

VisualOps.io looked great, but the signup form wouldn’t submit.


Software/services that don’t offer an AWS import feature

Cacoo, ConceptDraw, Creately, and Solution Assembly Line do not offer an AWS import feature.


The Winner?

There is no winner yet. All of the services I reviewed more-or-less mimic Amazon’s own high-level documentation diagrams. Such diagrams are useful, but they’re too high-level to suffice as technical documentation. What’s needed are diagrams that reflect the underlying AWS architecture, and I recognize that’s not an easy task. Such diagrams are far more complex than “this box goes inside of that box.” For now, if you want an AWS diagram that meets your needs, someone is going to have to do it by hand.

It’s Time to Stop Using the Term Network Function Virtualization (NFV)

I think it’s time to stop using the term “network function virtualization”. Why? Because it doesn’t exist, at least not in the way the term suggests. The term is a category error, and when people try to make sense of the term, confusion and frustration ensue.

Think of it like this: what’s the difference between a “virtual network function” and a “non-virtual network function”? For example, how is “virtual IP forwarding” different than “non-virtual IP forwarding?” Answer: it’s not.

So what then exactly is network function virtualization?

The Right Idea, The Wrong Term

The European Telecommunications Standards Institute, which arguably coined the term NFV, said the following in a 2012 whitepaper (emphasis mine):

Network Functions Virtualisation aims to address these problems by leveraging standard IT virtualisation technology to consolidate many network equipment types onto industry standard high volume servers

Look at the bold text. How does one consolidate many network equipment types onto commodity servers? Let’s add some specifics to make it more concrete. How does one consolidate a firewall, router, switch, and load-balancer onto a server? By implementing those network functions in software and putting that software on the server.

But here’s the problem with calling that “network function virtualization”: virtualization has nothing to do with implementing network functions in software. In the early days of the Internet, routers (gateways as they were called back then) ran on commodity x86 machines with no virtualization (with the exception, maybe, of virtual memory).

Network functions don’t need virtualizing, and in fact, can’t be virtualized. But the term NFV suggests otherwise.

And that’s where the confusion started….

NFV is like dividing by zero: undefined

Conceptually, NFV is just implementing network functions in software. That’s easy enough to understand. And yet it’s hard to find an actual definition of it anywhere. Instead, you’ll see a lot of hand-wavy things like this:

NFV is a virtual networking concept…
NFV is a network architecture concept that uses the technologies of IT virtualization…

Hence the letters “N” and “V”. And then you have those who gave up on a definition and just went straight for the marketing lingo:

NFV is the next step…
…is the future…
…is the progression/evolution…

Others get closer by hinting at what NFV does, but stop short of actually saying what it is:

NFV consolidates multiple network functions onto industry standard equipment

This seems to be pretty close, but where does the virtualization part come in? Let’s try this blurb from Angela Karl at TechGenix:

[NFV lets] service providers and operators… abstract network services, including things such as load balancing, into a software that can run on basic server.

Bingo. NFV is not virtualization at all. It’s an abstraction of network functions!

NFV is Abstraction, not Virtualization

Before you accuse me of splitting hairs, let me explain the distinction between virtualization and abstraction. Put simply, virtualization is an imitation, while abstraction is a disguise.

Virtualization is an imitation

When you virtualize something, you’re creating an imitation of the thing you’re virtualizing.

For example, when you create a virtual disk in your favorite hypervisor, you’re hiding the characteristics of the underlying storage (disk geometry, partition info, formatting, interface, etc.). But in the same motion, you give the virtual disk the same types of characteristics: disk geometry, partition info, formatting, interface, and so on. To put it in programming lingo, the properties are the same, but the values are different.

Virtualization preserves the underlying properties and doesn’t add any property that’s not already there. Have you ever pinged a virtual disk? Probably not, because virtual disks, like real disks, don’t have network stacks.

Virtualization also preserves the behavior of the thing being virtualized. That’s why you can “shut down” and “power off” virtual machines and “format” and “repartition” virtual disks.
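
In code, “same properties, different values” looks something like this sketch. (The class and method names are invented for illustration; no real hypervisor exposes disks this way.)

```python
# Sketch: a virtual disk imitates a physical one, exposing the same
# properties and behavior. Only the values behind them differ.

class PhysicalDisk:
    def __init__(self, sector_count):
        self.sector_count = sector_count
        self.sectors = {}                  # sector data lives on the platter
    def read(self, lba):
        return self.sectors.get(lba, b"\x00" * 512)
    def write(self, lba, data):
        self.sectors[lba] = data

class VirtualDisk(PhysicalDisk):
    """Same interface; the difference is where the bits actually live."""
    def __init__(self, sector_count, backing_file):
        super().__init__(sector_count)
        self.backing_file = backing_file   # e.g. a .vmdk file on the host

# A driver can't tell the two apart, and neither grows a property a disk
# doesn't have. You still can't ping either one.
for disk in (PhysicalDisk(1024), VirtualDisk(1024, "guest.vmdk")):
    disk.write(0, b"boot")
    print(disk.read(0))   # b'boot' both times
```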

Now try fitting NFV into this definition of virtualization. How do you “virtually route” or “virtually block” a packet? It’s a category error.

Abstraction is a disguise

When you create an abstraction, you’re creating a disguise. Unlike virtualization, with abstraction you’re changing some of the properties of the thing you’re abstracting. You’re taking something and dressing it up to look and act completely different.

Swap space is a good example of an abstraction. It’s data on storage that looks and acts like random access memory (but way slower). Before the days of SSDs, swap was stored on spinning disks which were read and written sequentially. This is completely different than memory which can be read and written randomly. Swap space is a file (Windows) or partition (Linux) disguised as RAM.

The Case for Abstracting Network Functions

Let’s bring this around to networking. What does it mean to abstract network functions like IP routing and traffic filtering? More importantly, why would you want to? Why not just use virtual routers, switches, and firewalls?

Simply put, virtualized network devices don’t scale. The reasons are too numerous to list here, but suffice it to say that TCP/IP and Ethernet networks carry a lot of built-in overhead. This is why cloud providers take network function abstraction to an extreme. It’s utterly necessary. Let’s take Amazon AWS as an example.

In AWS, an instance has a virtual network interface. But what’s that virtual network interface connected to? A virtual switch? Nope. A virtual router? Try again. A virtual firewall? Negative. Virtual routers, switches, and firewalls don’t exist on the AWS platform. So the question remains: what’s that virtual NIC connected to?

The answer: nothing. The word “connected” here is a concept borrowed from the physical world. You “connect” NICs to switches. In your favorite hypervisor, you “connect” a vNIC to a vSwitch.

But there are no virtual switches or routers in this cloud. They’ve been abstracted into network functions. AWS presents this as if you’re connecting a virtual interface to a “subnet” rather than a router. That’s because AWS has abstracted IP routing away from you, leaving you with nothing to “connect” to. After all, we’re dealing with data. Not devices. Not even virtual devices. So what happens? The virtual NIC passes its traffic to some software that performs network functions. This software does a number of things:

  • Switching – It looks at the Ethernet frame and checks the destination MAC address. If the frame contains an ARP request seeking the default gateway, it replies.
  • Traffic Filtering – If it’s a unicast for the default gateway, it looks at the IP header and checks the destination against the security group rules, NACLs, and routing rules.
  • Routing – If it needs to forward the packet, it forwards it (although forwarding may simply consist of passing it off to another function).

This is a massive oversimplification, of course, but you get the idea. There’s no reason to “virtualize” anything here because all you’re doing is manipulating bits!
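That division of labor can be sketched in a few lines of Python. Everything here — the gateway addresses, the rule format, the frame dictionaries — is made up purely for illustration; a real cloud dataplane is vastly more sophisticated:

```python
from ipaddress import ip_address, ip_network

# Hypothetical values standing in for the cloud provider's configuration.
GATEWAY_MAC = "0a:00:00:00:00:01"
GATEWAY_IP = "10.0.0.1"
ALLOWED_DESTS = {"10.0.1.0/24"}  # stand-in for security group / NACL rules


def allowed(dst_ip):
    """Traffic filtering: check the destination IP against our rules."""
    return any(ip_address(dst_ip) in ip_network(net) for net in ALLOWED_DESTS)


def handle_frame(frame):
    """Handle one frame the way the bullets above describe."""
    # "Switching": answer ARP requests for the default gateway ourselves.
    if frame["type"] == "arp" and frame["target_ip"] == GATEWAY_IP:
        return {"type": "arp-reply", "mac": GATEWAY_MAC}
    # Unicast for the default gateway: filter, then "route".
    if frame["type"] == "ipv4" and frame["dst_mac"] == GATEWAY_MAC:
        if not allowed(frame["dst_ip"]):
            return {"type": "drop"}  # blocked by the filter rules
        # "Routing" here is just handing off to the next function.
        return {"type": "forward", "dst_ip": frame["dst_ip"]}
    return {"type": "drop"}
```

No virtual switch, no virtual router — just a function inspecting fields and manipulating bits.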

Overvirtualizing the Network

It’s possible to over-virtualize. To give an analogy, suppose you wanted to write a calculator application (let’s call it a virtual calculator). You’d draw a little box with numbers and operators, and let the user click the buttons to perform a calculation. Now imagine that you also decided to write a “virtual hand” application that virtually pressed buttons on the virtual calculator. That would be ridiculous, but that’s essentially what happens when you connect two virtual network devices together.

There’s an especially great temptation to do this in the cloud. Folks spin up virtual firewalls, cluster them together, and connect them to virtual load balancers, IDSes, and whatnot. That’s not bad or technically wrong, but in many cases it’s just unnecessary. All of those network functions can be performed in software, without the additional complexity of virtual NICs connecting to this and that.

The Difference Between a Virtual Network Device and a Network Function

When it comes to the cloud, it’s not always clear what you’re looking at. Here are some questions I ask to figure out whether a thing in the cloud is a virtual device or just an abstracted network function:

Is there an obvious real-world analog?

There’s a continuum here. An instance has a clear real-world analog: a virtual machine. An Internet gateway sounds an awful lot like the router your ISP puts at your site, but “connecting” to it is a bit hand-wavy. You don’t get a next-hop IP or interface. Instead, your next hop is igw- followed by some gibberish. That smacks of an abstraction to me.

Can you view the MAC address table or create bogus ARP entries?

If you can, it’s a virtual device (maybe just a Linux VM). If not, it’s likely some voodoo done in software.

Can you blackhole routes?

In AWS you can create blackhole routes, although people usually do it by accident: create a route with an internet gateway as the next hop, then delete the gateway. But can you deliberately create a route pointing to null0? If not, you have an abstraction, not a virtual device.

Does the TTL get decremented at each hop?

A TTL in an overlay can get decremented based on the hops in the underlay, but what I’m talking about here is not decrementing the TTL when you normally would. AWS doesn’t decrement the TTL at each hop, so if you ever got into a routing loop, packets would circulate forever. That’s why AWS doesn’t allow transitive routing through its VPCs. If your TTLs don’t go down at each hop, as with AWS, you’re probably dealing with an abstraction.
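To see why the missing decrement matters, here’s a toy Python sketch — hypothetical router names, nothing AWS-specific — of a packet walking a next-hop table:

```python
def walk(start, next_hop, ttl=8):
    """Follow next-hop pointers, decrementing TTL; return hops traversed.

    'deliver' marks the destination. When the TTL hits 0 the packet is
    dropped, which is what breaks routing loops on a normal IP network.
    """
    hop, hops = start, 0
    while hop != "deliver" and ttl > 0:
        ttl -= 1
        hop = next_hop[hop]
        hops += 1
    return hops


straight = {"r1": "r2", "r2": "deliver"}
loop = {"r1": "r2", "r2": "r1"}  # two routers pointing at each other

print(walk("r1", straight))  # 2 hops, delivered
print(walk("r1", loop))      # 8 hops, then dropped when the TTL expires
# Remove the ttl decrement and the loop case never terminates -- the
# problem AWS sidesteps by forbidding transitive routing instead.
```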


Visual Studio Code as a PowerShell Integrated Scripting Environment

I know what you’re thinking. “Why use Visual Studio Code instead of the PowerShell ISE?” Well, if you’re using Mac OS or Linux, you don’t have the option to use the PowerShell ISE natively. And that’s a problem if you want to take advantage of the cross-platform capabilities of PowerShell Core. In this article, I’ll show you how to use Visual Studio Code (free!) to perform the key functions of the PowerShell ISE, namely:

  • Simultaneously view code and execute it in the PoSh terminal
  • Execute code on a selection or line-by-line basis (F8)
  • Syntax highlighting (for people who are easily bored like me)

Installing Visual Studio Code

The best way to install most things is with a package manager: apt-get or yum for Linux distros, homebrew for Mac OS, and Chocolatey for Windows. Or you could go old school and download the installer from the Visual Studio Code website.

Installing the PowerShell Extension

Go to the Extensions button (looks like a busted up square) or View > Extensions.

If the PowerShell extension doesn’t show up under the bombastic “RECOMMENDED” heading, just search for it in the “Search Extensions” field. Then install it.

Integrating the PowerShell Terminal

Open up a PowerShell script of your choice. Then in the menu, go to View > Integrated Terminal. You should see a PS prompt in the panel that appears.

If you don’t see the PS prompt, make sure you’ve selected “TERMINAL” and “PowerShell Integrated” from the drop-down menu.

Running Only Selected Code

In the PowerShell ISE, you can select a block of text and hit F8, and the ISE runs only that code. Or you can position your cursor at the end of a line and hit F8, and the ISE runs only the code on that line. Next we’ll enable the exact same behavior in Visual Studio Code.

Go to File > Preferences > Keyboard Shortcuts.

In the text entry field at the top, type “runsel”. You should see two items:

  • “Run Selection” with a keybinding to F8
  • “Run Selected Text In Active Terminal” with no keybinding

This isn’t quite what we want, because the F8 binding as it stands runs the selection in the “OUTPUT” section, not in the PowerShell terminal. Obviously, that’s not the normal behavior of the PowerShell ISE. Let’s fix it.

Right-click the “Run Selection” item and select “Remove keybinding”

Right-click the “Run Selected Text in Active Terminal” item and select “Add Keybinding”

Press the F8 key (or whatever key you want to use), then hit Enter.
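If you prefer editing keybindings.json directly (the editor icon at the top of the Keyboard Shortcuts view), the same two changes look roughly like this. The command IDs below are what current versions of VS Code and the PowerShell extension use, but verify yours by right-clicking the entries in the UI:

```json
// keybindings.json -- a leading minus removes a default binding.
[
    { "key": "f8", "command": "-PowerShell.RunSelection" },
    { "key": "f8", "command": "workbench.action.terminal.runSelectedText" }
]
```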

Testing It Out

Go back to your code and select a block of code. Hit F8 and watch the magic!

That’s what I’m talking about!

But… as of this writing, there’s an issue being tracked on the vscode-powershell GitHub repo: multi-line input in the integrated console doesn’t work. That means you can’t select an entire function block, hit F8, and have it work. It will throw ugly errors in your face.
