Archive for infrastructure

Updating an Auto-Scaled BIG-IP VE WAF in AWS

Posted in security, f5, big-ip, cloud computing, infrastructure, waf, aws by psilva on May 23rd, 2017

Update servers while continuing to process application traffic.

Recently we've been showing how to deploy BIG-IP (and F5 WAF) in various clouds like Azure and AWS.

Today, we’ll take a look at how to update an AWS auto-scaled BIG-IP VE web application firewall (WAF) that was initially created by using this F5 GitHub template. This solution implements auto-scaling of BIG-IP Virtual Edition (VE) Web Application Firewall (WAF) systems in Amazon Web Services. The BIG-IP VEs have the Local Traffic Manager (LTM) and Application Security Manager (ASM) modules enabled to provide advanced traffic management and web application security functionality. As traffic increases or decreases, the number of BIG-IP VE WAF instances automatically increases or decreases accordingly.

Prerequisites:

asw1.jpg

So, let’s assume you used the CFT to create a BIG-IP WAF in front of your application servers…and your business is so successful that you need to be able to process more traffic. You do not need to tear down your deployment and start over – you can make changes to your current deployment while the WAF is still running and protecting your environment.

For this article, a few examples of things you can change: When you first configured the WAF, you chose a specific throughput limit for BIG-IP; you can update that. You may also have selected a smaller AWS instance size and now want to choose a larger AWS instance type with more CPU. Or, you may have set up your auto-scaling group to launch a maximum of two instances and now want to update the auto-scaling group attributes to allow three.

This is all possible so let’s check it out.

The first thing we want to do is connect to one of the BIG-IP VE instances and save the latest configuration. We open PuTTY, log in and run the TMSH command (save /sys ucs /var/tmp/original.ucs) to save the UCS config file.
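
If you'd rather script this step than open PuTTY, here's a rough sketch using Python and the paramiko SSH library. The management address and credentials are placeholders, not values from this deployment:

    import paramiko

    BIGIP_HOST = "10.0.0.100"     # placeholder: BIG-IP management address
    BIGIP_USER = "admin"          # placeholder credentials
    BIGIP_PASS = "admin-password"

    # Open an SSH session to the BIG-IP and save the running config to a UCS archive.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(BIGIP_HOST, username=BIGIP_USER, password=BIGIP_PASS)

    # If the account's login shell is already tmsh, drop the leading "tmsh".
    stdin, stdout, stderr = client.exec_command(
        "tmsh save /sys ucs /var/tmp/original.ucs"
    )
    print(stdout.read().decode())
    print(stderr.read().decode())
    client.close()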


Then we use WinSCP to copy the UCS file to the desktop. You can use whatever application you like and copy the file wherever you like, as this is just a temporary location.


Once that’s done, open the AWS Management Console and go to S3. A bucket was created when you first deployed the CFT, so locate yours.


When you find your bucket, click it and then click the Backup folder.


Once there, upload the UCS file into that folder.


The UCS file is now in the folder.
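
If you want to script the upload rather than use the console, a minimal boto3 sketch might look like this. The bucket name and key are placeholders – use the bucket and backup folder from your own deployment:

    import boto3

    BUCKET = "my-waf-deployment-bucket"  # placeholder: the S3 bucket the CFT created
    KEY = "backup/original.ucs"          # placeholder: backup folder + file name

    # Upload the UCS archive we copied off the BIG-IP into the deployment's backup folder.
    s3 = boto3.client("s3")
    s3.upload_file("original.ucs", BUCKET, KEY)
    print(f"Uploaded original.ucs to s3://{BUCKET}/{KEY}")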


The last step is to redeploy the CFT and change the selected options. From the main AWS Management Console, click CloudFormation, select your stack and, under Actions, click Update Stack.


Next, you can see the template we originally deployed; to update it, click Next.


Scroll down the page to Instance Configuration to change the instance type (size).


Right under that is Maximum Throughput, where you can update the throughput limit.


And a little further down, under Auto Scaling Configuration, is where you can update the maximum number of instances. When done, click Next at the bottom of the page.


It’ll ask you to review and confirm the changes. Click Update.
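
The same update can also be driven from code instead of the console. Here's a hedged boto3 sketch; the stack name and parameter keys are placeholders – the real keys come from the F5 CFT you actually deployed:

    import boto3

    STACK_NAME = "my-f5-waf-stack"  # placeholder: your CloudFormation stack name

    cfn = boto3.client("cloudformation")

    # Re-run the existing template, changing only the values we care about.
    # The parameter keys below are illustrative; use the keys defined in the
    # template you actually deployed, and pass any remaining parameters with
    # {"ParameterKey": ..., "UsePreviousValue": True} so they keep their values.
    cfn.update_stack(
        StackName=STACK_NAME,
        UsePreviousTemplate=True,
        Capabilities=["CAPABILITY_IAM"],
        Parameters=[
            {"ParameterKey": "instanceType", "ParameterValue": "m4.xlarge"},
            {"ParameterKey": "maxThroughput", "ParameterValue": "1000Mbps"},
            {"ParameterKey": "scalingMaxSize", "ParameterValue": "3"},
        ],
    )

    # Wait for the update to finish before checking the new instances in EC2.
    cfn.get_waiter("stack_update_complete").wait(StackName=STACK_NAME)
    print("Stack update complete")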


You can watch the progress, and if your current BIG-IP VE instance is actively processing traffic, it will remain active until the new instance is ready. Give it a little time to ensure the new instance is up and added to the auto scaling group before the old instance is terminated.


When it is done, we’ll confirm a few things.

Go to the EC2 Dashboard and check the running instances. We can see the old instance is terminated and the new instance is now available. You can also check the instance size, and within the auto scaling group you can see the new maximum number of instances.


And we’re deployed.

You can follow this same workflow to update other attributes of your F5 WAF. This allows you to update your servers while continuing to process traffic.

Thanks to our TechPubs group, you can also watch the video demo.

ps

Related:

 




Q/A with Betsson’s Patrik Jonsson - DevCentral’s Featured Member for April

Posted in f5, big-ip, silva, devcentral, infrastructure, games by psilva on April 4th, 2017

 

 

Patrik Jonsson lives in Stockholm with his wife and son and works as a network engineer for a company providing online casino games across the world.

Outside work, he likes to spend time with his family, play around with his home VMware lab and watch movies. He also loves travelling and having a beer with friends.

Patrik is also a 2017 DevCentral MVP and DevCentral’s Featured Member for April! DevCentral got a chance to talk with Patrik about his work, life and his project the BIG-IP Report.

DevCentral: You’ve been a very active contributor to the DevCentral community, and we wondered what keeps you involved?

Patrik: One of the best, and most fun, ways to learn new things is to take on problems or discussions presented by fellow technicians. It forces you to continuously challenge what you think you know and keeps your knowledge up to date. In addition, when I need input, or help myself, DevCentral has so many brilliant and helpful members ready to take on whatever you throw at them.

DC: Tell us a little about the areas of BIG-IP expertise you have.

PJ: The first time I ran into a BIG-IP was just after I graduated from university. It was a 1000 series running BIG-IP v4. When I quit that job 6 years later I considered asking to bring it home with me, but somehow my girlfriend at the time was not as keen to the idea. Still don’t know why. :-)

I’ve been working mostly with BIG-IP LTM and iControl, but recently I’ve started to dabble a bit with APM, GTM/DNS and ASM as well.

DC: You are a Network Security Specialist at Betsson. Can you describe your typical workday?

PJ: At Betsson you never know what’s going to happen when you step into the office. The gaming industry has very tough competition and getting comfortable as one of the bigger players around is not an option since rivals are always ready to take your place. This, combined with awesome colleagues, makes it a joy to step into the office every morning.

DC: Describe one of your biggest BIG-IP challenges and how DevCentral helped in that situation.


PJ: Being a multinational company with offices supporting multiple brands, one of the biggest challenges we have is knowledge sharing. Giving the developers the correct information when they need it is vital for efficient application delivery. In order to provide this, we have used iRules to present troubleshooting information in the form of custom headers so developers can see which pool and member responded to their request and the current status of all members. We also have a smarter version of the traditional sorry page which shows information about the failed pool and what’s being monitored. And then of course, BIG-IP Report.

All of these are using iRules and iControl and would not have been possible without the DevCentral API documentation and of course, my hero Joe Pruitt.

DC: What can readers learn from your blog: https://loadbalancing.se/ and what is the BIG-IP Report?

PJ: My blog is where I post ideas and projects that I have. There’s a BIG-IP APM + Google Authenticator guide, F5 Web UI augmentation script for version 11 and a few other things.

BIG-IP Report was born out of a need to show people the load balancing configuration in a simple manner without giving them access to the BIG-IP interface. After implementing it we have gone from developers asking us where things are, to them instead telling us about bad configuration. We also discovered that it is awesome for us as well, as we can get an overview of the configuration across multiple devices. Finding a specific VIP, or pool is so much easier when the information is in one place.

I guess the best way to understand it is to try it at http://loadbalancing.se/bigipreportdemo/

The blog is not updated that often, so it’s safe to subscribe without getting too much spam.

DC: Lastly, if you weren’t an IT admin – what would be your dream job? Or better, when you were a kid – what did you want to be when you grew up?

PJ: I think my dream would be working with a non-profit organization helping people in need. I love travelling and combining that with something meaningful would be really nice.

Thanks Patrik! Check out all of Patrik’s DevCentral contributions, check out his blog, or connect on LinkedIn. And visit Betsson on the web or follow on Twitter.

 

 




Lightboard Lessons: Service Consolidation on BIG-IP

Posted in f5, adc, silva, application delivery, lightboard, devcentral, infrastructure, consolidate by psilva on March 29th, 2017

Consolidating point devices and services in your datacenter or cloud can help with the cost, complexity, efficiency, management, provisioning and troubleshooting of your infrastructure and systems.

In this Lightboard Lesson, I light up many of the services you can consolidate on BIG-IP.

ps

 




What is Virtual Desktop Infrastructure (VDI)?

Posted in security, big-ip, cloud computing, mobile, vdi, devcentral, infrastructure, access by psilva on March 8th, 2017


What is VDI?

Imagine not having to carry around a laptop or be sitting in a cubicle to access your work desktop applications. Virtual desktop infrastructure (VDI) is appealing to many different constituencies because it combines the benefits of anywhere access with desktop support improvements.

Employees typically use a wide range of devices, from laptops and tablets to desktops and smartphones. The diversity of these mobile devices and the sheer number of them in the workplace can overwhelm IT and strain your resources.

Desktop virtualization centralizes sets of desktops, usually in a data center or cloud environment, and then provides access to your employees whether they are in the office, at home or mobile. VDI deployments virtualize user desktops by delivering them to distinctive endpoint devices over the network from a central location. There are many reasons why organizations deploy VDI solutions – it’s easier for IT to manage, it can reduce capital expenditures, improve security and help companies run a ‘greener’ business.

Since users’ primary work tools are now located in a data center rather than on their own local machines, VDI can strain network resources, and the user experience can be negatively affected. Desktop virtualization is a bit more complex than server virtualization since it requires more network infrastructure, servers, server administrators, authentication systems, and storage. VDI’s effect on the network is significant; it may necessitate infrastructure changes to accommodate the large volume of client information that will be traversing the network. When a user’s desktop moves from a physical machine under the desk to the data center, the user experience becomes paramount; a poor VDI deployment will result in IT being flooded with “My desktop is too slow” calls.


Why VDI?

Mobile devices and bring-your-own computing are popular drivers for VDI deployments. It enables employees to work from anywhere and simplifies/unifies desktop management, especially updating operating systems and applications. It can lower costs, provide flexible remote access, and improve security and compliance, while potentially offering organizations disaster recovery options. It also enables employee flexibility and reduces the IT risk of employee-owned devices. VDI allows employees to work with a wide range of devices from laptops to tablets to smartphones. Employees can sign on from wherever they are, whenever they like and with whichever device they choose.

Deploying virtual desktops can also increase IT efficiency and reduce IT workload since the desktops are centralized.  It also benefits IT with greater access and compliance control, while at the same time, allowing employees the freedom to use their mobile device of choice. IT departments can remove obsolete versions of application software or perhaps enhance the security policy. Either way, the employee always has the most up to date desktop image.

Things to Consider

Desktop virtualization is no longer about the desktop, it’s about allowing employees desktop access from wherever they are. So things like availability, access, security, DR, authentication, storage, network latency and SSO are all areas to keep in mind when deploying a VDI solution.

VDI Providers

Some VDI solutions include VMware View, Citrix XenDesktop, and Microsoft RDS.

Next Steps

If you'd like to learn more or dig deeper into VDI, here are some additional resources:

Also, here are some other articles from the #Basics Series.

 

 

 

 




What is an Application Delivery Controller - Part II

Posted in f5, big-ip, availability, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 26th, 2017


Application Delivery Basics

One of the unfortunate effects of the continued evolution of the load balancer into today's application delivery controller (ADC) is that it is often too easy to forget the basic problem for which load balancers were originally created—producing highly available, scalable, and predictable application services. We get too lost in the realm of intelligent application routing, virtualized application services, and shared infrastructure deployments to remember that none of these things are possible without a firm basis in basic load balancing technology. So how important is load balancing, and how do its effects lead to streamlined application delivery?

Let’s examine the basic application delivery transaction. The ADC will typically sit in-line between the client and the hosts that provide the services the client wants to use. As with most things in application delivery, this is not a rule, but more of a best practice in a typical deployment. Let's also assume that the ADC is already configured with a virtual server that points to a cluster consisting of two service points. In this deployment scenario, it is common for the hosts to have a return route that points back to the load balancer so that return traffic will be processed through it on its way back to the client.

The basic application delivery transaction is as follows:

  1. The client attempts to connect with the service on the ADC.
  2. The ADC accepts the connection, and after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
  3. The host accepts the connection and responds back to the original source, the client, via its default route, the load balancer.
  4. The ADC intercepts the return packet from the host and now changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
  5. The client receives the return packet, believing that it came from the virtual server or host, and continues the process.


 

Figure 1. A basic load balancing transaction.

This very simple example is relatively straightforward, but there are a couple of key elements to take note of. First, as far as the client knows, it sends packets to the virtual server and the virtual server responds—simple. Second, the NAT takes place. This is where the ADC replaces the destination IP sent by the client (of the virtual server) with the destination IP of the host to which it has chosen to load balance the request. Step three is the second half of this process (the part that makes the NAT "bi-directional"). The source IP of the return packet from the host will be the IP of the host; if this address were not changed and the packet was simply forwarded to the client, the client would be receiving a packet from someone it didn't request one from, and would simply drop it. Instead, the ADC, remembering the connection, rewrites the packet so that the source IP is that of the virtual server, thus solving this problem.
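
If it helps to see that in code form, here's a toy sketch of the idea (pure illustration, not BIG-IP code): the ADC keeps a connection table so it can rewrite the destination on the way in and the source on the way back out.

    # Toy model of the bi-directional NAT an ADC performs. Illustrative only.
    VIRTUAL_SERVER = ("203.0.113.10", 80)           # what the client connects to
    HOSTS = [("10.0.0.11", 80), ("10.0.0.12", 80)]  # real service points

    connection_table = {}  # (client_ip, client_port) -> chosen host

    def inbound(client, chosen_host):
        """Client -> virtual server: rewrite the destination to the chosen host."""
        connection_table[client] = chosen_host
        return {"src": client, "dst": chosen_host}    # client source IP untouched

    def outbound(client, host):
        """Host -> client: rewrite the source back to the virtual server."""
        assert connection_table[client] == host       # the ADC "remembers" the connection
        return {"src": VIRTUAL_SERVER, "dst": client}

    print(inbound(("198.51.100.7", 34567), HOSTS[0]))
    print(outbound(("198.51.100.7", 34567), HOSTS[0]))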

The Application Delivery Decision

So, how does the ADC decide which host to send the connection to? And what happens if the selected host isn't working?

Let's discuss the second question first. What happens if the selected host isn't working? The simple answer is that it doesn't respond to the client request and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most ADC technology includes some level of health monitoring that determines whether a host is actually available before attempting to send connections to it.

There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and should be removed from the cluster of available services. Unfortunately, even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore most devices can do "service PINGs" of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the load balancer to differentiate between multiple services on a single host. The ADC understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered as valid destinations for user traffic.
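
As a rough illustration (the addresses are placeholders), a basic "service PING" can be as simple as attempting a TCP connection to each service port, which already tells you more than an ICMP ping of the host:

    import socket

    def service_is_up(host, port, timeout=2.0):
        """Service-level health check: can we complete a TCP handshake on this port?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Check two services on the same host independently.
    print(service_is_up("10.0.0.11", 80))   # HTTP
    print(service_is_up("10.0.0.11", 443))  # HTTPS on the same host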

This brings us back to the first question: How does the ADC decide which host to send a connection request to? Each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) which makes up the list of possibilities. Additionally, the health monitoring modifies that list to make a list of "currently available" hosts that provide the indicated service. It is this modified list from which the ADC chooses the host that will receive a new connection. Deciding the exact host depends on the ADC algorithm associated with that particular cluster. The most common is simple round-robin where the ADC simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current-connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services.
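
Here's a minimal sketch of that selection logic, assuming the health monitors have already produced the "currently available" list:

    import itertools

    class RoundRobinPool:
        """Pick the next host in order, skipping any the health monitor marked down."""

        def __init__(self, hosts):
            self.hosts = hosts                   # the cluster list for this virtual server
            self.available = set(hosts)          # maintained by health monitoring
            self._cycle = itertools.cycle(hosts)

        def mark_down(self, host):
            self.available.discard(host)

        def mark_up(self, host):
            self.available.add(host)

        def next_host(self):
            for _ in range(len(self.hosts)):
                host = next(self._cycle)
                if host in self.available:
                    return host
            return None                          # nothing available: fail the connection

    pool = RoundRobinPool(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
    pool.mark_down("10.0.0.12")
    print([pool.next_host() for _ in range(4)])  # .11, .13, .11, .13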

Sufficiently advanced application delivery systems will also be able to synthesize health monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A common example would be in e-commerce situations where a single host will provide both standard HTTP services (port 80) as well as HTTPS (SSL/TLS at port 443) and any other potential service ports that need to be allowed. In many of these circumstances, you don't want a user going to a host that has one service operational, but not the other. In other words, if the HTTPS service should fail on a host, you also want that host's HTTP service to be taken out of the cluster list of available services. This functionality is increasingly important as HTTP-like services become more differentiated with things like XML and scripting.

To Load Balance or Not to Load Balance?

Load balancing in regards to picking an available service when a client initiates a transaction request is only half of the solution. Once the connection is established, the ADC must keep track of whether the following traffic from that user should be load balanced. There are generally two specific issues with handling follow-on traffic once it has been load balanced: connection maintenance and persistence.

Connection maintenance

If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the ADC must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance and requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most ADCs.

Persistence

Increasingly more common, however, is when the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task. In some cases, like standard web browsing, it doesn't matter and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, JavaScript, e-commerce "shopping cart," HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this depending on the protocol and the desired results. For example, in modern HTTP transactions, the server can specify a "keep-alive" connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web and mobile services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most ADCs provide other mechanisms for creating artificial server affinity.

One of the most basic forms of persistence is source-address affinity. Source address affinity persistence directs session requests to the same server based solely on the source IP address of a packet. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency as it can be applied across all virtual servers and all services. In practice however, the widespread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address, resulting in none of those users being load balanced after the first user's request—essentially nullifying the ADC capability. Today, the intelligence of ADCs allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as user name, to maintain persistence. However, organizations must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.
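
A small sketch of source-address affinity layered on top of the load balancing decision (illustrative only; a real ADC also ages out these entries):

    persistence_table = {}  # source IP -> back-end service host

    def pick_host(source_ip, load_balance):
        """Return the persisted host for this source IP, or load balance and remember it."""
        host = persistence_table.get(source_ip)
        if host is None:
            host = load_balance()               # e.g. round robin over available hosts
            persistence_table[source_ip] = host
        return host

    # Every request from 198.51.100.7 lands on the same back-end host, which is
    # exactly why a proxy hiding many users behind one IP defeats this approach.
    print(pick_host("198.51.100.7", lambda: "10.0.0.11"))
    print(pick_host("198.51.100.7", lambda: "10.0.0.12"))  # still 10.0.0.11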

Final Thoughts

It is important to understand that basic load balancing technology, while still in use, is now only considered a feature of Application Delivery Controllers. ADCs evolved from the first load balancers through the service virtualization process, and today are also available as software-only virtual editions. They can not only improve availability, but also affect the security and performance of the application services being requested.

Today, most organizations realize that simply being able to reach an application doesn't make it usable; and unusable applications mean wasted time and money for the enterprise deploying them. ADCs enable organizations to consolidate network-based services like SSL/TLS offload, caching, compression, rate-shaping, intrusion detection, application firewalls, and even remote access into a single strategic point that can be shared and reused across all application services and all hosts to create a virtualized Application Delivery Network. Basic load balancing is the foundation without which none of the enhanced functionality of today's ADCs would be possible.

And if you missed What is an ADC Part 1, you can find it here.

ps

Next Steps

Now that you’ve gotten this far, would you like to dig deeper or learn more about how application delivery works? Cool, then check out these resources:

 




What is Load Balancing?

Posted in f5, big-ip, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 23rd, 2017


The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable. Predictability is a little less clear as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. Domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in different order.


 

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1. This solution was simple and provided the basic characteristics of what customers were looking for by distributing users sequentially across multiple physical machines using the name as the virtualization point.
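
Here's a toy sketch of that rotation – not a real DNS server, just the record-ordering idea:

    from collections import deque

    class RoundRobinDNS:
        """Answer each query with the same A records, rotated one position each time."""

        def __init__(self, name, addresses):
            self.name = name
            self.addresses = deque(addresses)

        def resolve(self, query):
            if query != self.name:
                return []
            answer = list(self.addresses)
            self.addresses.rotate(-1)           # next answer starts with the next server
            return answer

    dns = RoundRobinDNS("www.example.com", ["192.0.2.1", "192.0.2.2", "192.0.2.3"])
    print(dns.resolve("www.example.com"))  # 1, 2, 3
    print(dns.resolve("www.example.com"))  # 2, 3, 1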

From a scalability standpoint, this solution worked remarkably well; probably the reason why derivatives of this method are still in use today, particularly in regards to global load balancing or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however, is that DNS responses do have a maximum length that is typically allowed, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing if the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work. 

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.


Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability. 

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances. These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straight-forward network techniques. In essence, these devices would present a virtual server address to the outside world and, when users attempted to connect, they would forward the connection on to the most appropriate real server, doing bi-directional network address translation (NAT).


Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner—finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was only limited by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). 

This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While most ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

ps

Next Steps

Ready to plunge into the next level of Load Balancing? Take a peek at these resources:

 

 

 

 

 

 




Lightboard Lessons: What is MQTT?

Posted in f5, big-ip, lightboard, devcentral, infrastructure, mqtt, m2m, iot, sensors by psilva on January 11th, 2017

The mad dash to connect virtually every noun to the internet, or the Internet of Things, is creating a massive M2M network of devices, systems, sensors and actuators that connect and communicate on the Internet.

With that, they need a communications protocol to understand each other. One of those is Message Queue Telemetry Transport (MQTT). MQTT is a “publish and subscribe” messaging protocol designed for lightweight machine-to-machine (or IoT) communications.
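
To get a feel for the publish/subscribe model, here's a small sketch using the Python paho-mqtt library (1.x-style callbacks; the broker address and topic are placeholders). A sensor publishes to a topic on the broker, and the broker fans the message out to every subscriber:

    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"         # placeholder MQTT broker address
    TOPIC = "sensors/garage/temperature"  # placeholder topic

    def on_connect(client, userdata, flags, rc):
        # Once connected to the broker, subscribe and publish a sample reading.
        client.subscribe(TOPIC)
        client.publish(TOPIC, "21.5")

    def on_message(client, userdata, msg):
        # Called for every message the broker delivers on a subscribed topic.
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, 60)
    client.loop_forever()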

In this episode of Lightboard Lessons, I light up how MQTT works.

 




Blog Roll 2016

Posted in security, f5, big-ip, cloud computing, silva, application delivery, devcentral, infrastructure, access, iot by psilva on December 20th, 2016

It’s that time of year when we gift and re-gift, just like this text from last year. And the perfect opportunity to re-post, re-purpose and re-use all my 2016 entries.

After 12 years at F5, I had a bit of a transition in 2016, joining the amazing DevCentral team in February as a Sr. Solution Developer. You may have noticed a much more technical bent since then…hopefully. We completed our 101 Certification Exam this year and will be shooting for the 201 next quarter. We started highlighting our community with Featured Member spotlight articles and I finally started contributing to the awesome LightBoard Lessons series. I also had ACDF surgery this year, which is why November is so light. Thanks to the team for all their support this year. You guys are the best!

If you missed any of the 53 attempts including 7 videos, here they are wrapped in one simple entry. I read somewhere that lists in articles are good. I broke it out by month to see what was happening at the time and let's be honest, pure self-promotion. I truly appreciate the reading and watching throughout 2016.

Have a Safe and Happy New Year!

 

January

February

March

April

May

June

July

August

September

October

November

December

 

And a couple special holiday themed entries from years past.

ps

Related

 




The Top 10, Top 10 Predictions for 2017

It’s the time of year when crystal balls get a viewing and many pundits put out their annual predictions for the coming year. Rather than thinking up my own, I figured I’d regurgitate what many others are expecting to happen.

8 Predictions About How the Security Industry Will Fare in 2017 – An eWeek slideshow looking at areas like IoT, ransomware, automated attacks and the security skills shortage in the industry. Chris Preimesberger (@editingwhiz), who does a monthly #eweekchat on twitter, covers many of the worries facing organizations.

10 IoT Predictions for 2017 – IoT was my number 1 in The Top 10, Top 10 Predictions for 2016 and no doubt, IoT will continue to cause havoc. People focus so much on the ‘things’ themselves rather than the risk of an internet connection. This list discusses how IoT will grow up in 2017, how having a service component will be key, the complete mess of standards and simply, ‘just because you can connect something to the Internet doesn’t mean that you should.’

10 Cloud Computing Trends to Watch in 2017 - Talkin' Cloud posts Forrester’s list of cloud computing predictions for 2017 including how hyperconverged infrastructures will help private clouds get real, ways to make cloud migration easier, the importance (or not) of megaclouds, that hybrid cloud networking will remain the weakest link in the hybrid cloud and that, finally, cloud service providers will design security into their offerings. What a novel idea.

2017 Breach Predictions: The big one is inevitable – While not a list, per se, NetworkWorld talks about how we’ll see more intricate, complex and undetected data integrity attacks and for two main reasons: financial gain and/or political manipulation. Political manipulation? No, that’ll never happen. NW talks about how cyber attacks will get worse due to IoT and gives some ideas on how to protect your data in 2017.

Catastrophic botnet to smash social media networks in 2017 – At the halfway point the Mirai botnet rears its ugly head and ZDNet explains how Mirai is far from the end of social media disruption due to botnets. With botnets-for-hire now available, there will be a significant uptick in social media botnets which aim not only to disrupt but also to earn money for their operators in 2017. Splendid.

Torrid Networks’ Top 10 Cyber Security Predictions For 2017 – Dhruv Soi looks at the overall cyber security industry and shares that many security product companies will add a machine learning twist to their products and, at the same time, there will be next-gen malware with an ability to bypass machine learning algorithms. He also talks about the fast adoption of Blockchain, the shift towards mobile exploitation and the increase of cyber insurance in 2017.

Fortinet 2017 Cybersecurity Predictions: Accountability Takes the Stage - Derek Manky goes in depth with this detailed article covering things like how IoT manufacturers will be held accountable for security breaches, how attackers will begin to turn up the heat in smart cities and if technology can close the gap on the critical cyber skills shortage. Each of his 6 predictions include a detailed description along with risks and potential solutions.

2017 security predictions – CIO always has a year-end prediction list and this year doesn’t disappoint. Rather than reviewing the obvious, they focus on things like Dwell time, or the interval between a successful attack and its discovery by the victim. In some cases, dwell times can reach as high as two years! They also detail how passwords will eventually grow up, how the security blame game will heat up and how mobile payments, too, will become a liability. Little different take and a good read.

Predictions for DevOps in 2017 – I’d be remiss if I didn’t include some prognosis about DevOps - one of the most misunderstood terms and functions of late. DevOps teams will start to include security as part of development instead of as an afterthought, we’ll see an increase in the popularity of containerization solutions, and DZone sees DevOps principles moving to the mainstream enterprise rather than one-off projects.

10 top holiday phishing scams – While many of the lists are forward-looking into the New Year, this one dives into the risks of the year end. Holiday shopping. A good list of holiday threats to watch out for including fake purchase invoices, scam email deals, fake surveys and shipping status malware messages begging you to click the link. Some advice: Don’t!

Bonus Prediction!

Top 10 Most Popular Robots to Buy in 2017 – All kinds of robots are now entering our homes and appearing in society. From vacuums to automated cars to drones to digital assistants, robots are interacting with us more than ever. While many are for home use, some also help with the disabled or help those suffering from various ailments like autism, a stroke or even a missing limb. They go by many monikers like Asimo, Spot, Moley, Pepper, Jibo and Milo to name a few.

Are you ready for 2017?

If you want to see if any of the previous year’s prognoses came true, here ya go:

ps




I’m Sorry Sir, You’re Obsolete

Is the rate of obsolescence proportionate to the rate of technology advances?

ihome.jpg

A few years ago, those little iHome alarm clocks started to appear in hotel rooms. Cool gadgets that you could mount your mobile phone on to charge the battery or play the music on the device. We also had a few in our home. They worked perfectly for the iPhone 4 since the connector was that 1-inch protruding plug. When I got the iPhone 6, those clocks instantly became useless. Obsolete. At least the phone connector part lost its value.

I’ve been thinking about this for a while.

The rate of obsolescence. The state when an object, technology, service or practice is no longer needed or wanted…even though it still may be in good working order. E-waste is the fastest growing segment of the waste stream. With the technological advances, not only are we buying the latest and greatest electronics but we’re also dumping perfectly good, working devices at silly rates. There was even a story about a Central Park mugger who rejected a flip phone during a heist.

Sure, the new gadget is shiny, faster, better or does stuff the other one couldn’t. All commercial things have the typical emerging, growth, maturity and decline model and I started wondering if the rate of obsolescence is proportionate to the rate of technology advances.

Moore’s Law and Wright’s Law are generally regarded as the best formulas for predicting how rapidly technology will advance. They offer approximations of the pace of technological progress. Moore’s Law (1965) describes the rate of improvement in the power of computer chips – essentially, the number of components doubles every 18 months. Generally, the principle can be applied to any technology and says that, depending on the technology, the rate of improvement will increase exponentially over time.

Wright’s Law (1936) says that progress increases with experience. Meaning that each percent increase in cumulative production (in a given industry) results in a fixed percentage improvement in production efficiency.
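
As a rough back-of-the-envelope sketch of both (the 18-month doubling comes from the paragraph above; the 20% learning rate for Wright’s Law is just an example number, not a claim about any particular industry):

    import math

    # Moore's Law: component count doubles every 18 months (1.5 years).
    def moore_growth(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    # Wright's Law (experience curve): every doubling of cumulative production
    # cuts unit cost by a fixed fraction -- a 20% learning rate is assumed here.
    def wright_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.20):
        b = -math.log2(1 - learning_rate)
        return first_unit_cost * cumulative_units ** (-b)

    print(f"Chip density after 6 years: {moore_growth(6):.0f}x")   # ~16x
    print(f"Unit cost at 1,000 cumulative units: {wright_cost(1000):.1f}")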

A simple web search of ‘rate of technological advancement’ returns scores of images that show a huge ramp going up.


But is there the same rapid decline chart for ‘out of date, lost freshness’ technologies gone by?

Nothing with a laptop falling off a cliff, but there are certainly charts showing the rate of e-waste.

e-waste-management-17-728.jpg

The climb is not as dramatic as technology advances (yet) but it is still growing rapidly.

So there doesn’t seem to be (or I simply can’t find) a direct correlation or chart that incorporates both technology advances and the resulting obsolescence. There are plenty of articles that do cover things that will be obsolete in the next few years (DVD players, landlines, clock radios); the jobs that will be obsolete (travel agent, taxi driver); and the things that became obsolete over the last decade.

There is a patent, US7949581B2, which describes a method of determining an obsolescence rate of a technology, yet that looks more at the life of a technology patent and its eventual decay and depreciation rate. Fewer citations over the years means patent decay. This is more about the depreciation of a specific patent rather than how society embraces and then ultimately tosses the technology.

The funny thing is that nowadays vintage items and antiques seem to be hot markets. Nostalgia is a big seller. Longing for the simpler times, I guess.

And lastly, the rate of World IQ over time. Is there a connection with technology?

world_IQ_over_time.png

If you feel your infrastructure is becoming obsolete with all that cloudy talk, F5 can certainly help by providing the critical application delivery services consistently across all your data centers - private clouds, public clouds, and hybrid deployments - so you can enjoy the same availability, security and performance you've come to expect.

ps

Related:

 

E-waste image courtesy: www.slideshare.net/SuharshHarsha

World IQ image courtesy: http://uhaweb.hartford.edu/BRBAKER/




