Archive for adc

Device Discovery on BIG-IQ 5.1

Posted in f5, big-ip, cloud computing, adc, application delivery, devcentral, aws, azure, access, big-iq by psilva on May 23rd, 2017

The first step in using a BIG-IQ to manage BIG-IP devices

BIG-IQ enables administrators to centrally manage BIG-IP infrastructure across the IT landscape. BIG-IQ discovers, tracks, manages, and monitors physical and virtual BIG-IP devices - in the cloud, on premises, or co-located at your preferred datacenter.

Let’s look at how to get BIG-IQ 5.1 to gather the information needed to start managing a BIG-IP device. This gathering process is called Device Discovery.

To get started, log on to the BIG-IQ.

iq2.jpg

Once in, the first step is to let the BIG-IQ know about the BIG-IP device that you want to manage. Here, in Device Management > Inventory > BIG-IP Devices, we’ll click Add Device.

iq3.jpg

Here we’ll need the IP address, user name and password of the device you want to manage. If the device you want to manage is part of a BIG-IP Device Service Cluster (DSC), you’ll probably want to manage that part of its configuration by adding it to a DSC group on the BIG-IQ. After selecting a DSC, tell the BIG-IQ how to handle synchronization when you deploy configuration changes so that when you deploy changes to one device, the other DSC members get the same changes. Best practice is to let BIG-IQ do the sync.

iq5.jpg

Next click Add at the bottom of the page to start the discovery process.

iq6.jpg

Once the device accepts your credentials, it’ll prompt you to choose the services that you want to manage. Always select LTM, even if you only manage other services, because the other services depend on LTM. To finish the device discovery task, click Discover.

iq7.jpg

The BIG-IQ gathers the information it needs for each of the services you requested. This first step takes only a few moments while the BIG-IQ discovers your devices. You are done with discovery once the status update reads, Complete import tasks.

iq8.jpg
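If you’d rather script the add-and-discover steps above instead of clicking through the UI, BIG-IQ also exposes a REST API. The Python sketch below is only a rough illustration: the endpoint path, payload fields, and addresses are assumptions, so verify them against the API reference for your BIG-IQ version before relying on anything like this.

# Hypothetical sketch of adding and discovering a BIG-IP via the BIG-IQ REST API.
# The endpoint path and payload fields below are assumptions for illustration only;
# verify them against the BIG-IQ API reference for your version.
import requests

BIGIQ = "https://bigiq.example.com"          # hypothetical BIG-IQ address
AUTH = ("admin", "admin-password")           # BIG-IQ credentials

device = {
    "address": "10.1.1.10",                  # BIG-IP to manage
    "userName": "admin",                     # BIG-IP credentials
    "password": "bigip-password",
    "clusterName": "",                       # DSC group name, if any
    "useBigiqSync": True,                    # let BIG-IQ handle config sync
    "modules": ["ltm", "apm"],               # always include LTM
}

# Assumed discovery task endpoint -- confirm before use.
resp = requests.post(
    f"{BIGIQ}/mgmt/cm/global/tasks/device-discovery",
    json=device,
    auth=AUTH,
    verify=False,                            # lab only; use real certificates in production
)
resp.raise_for_status()
task = resp.json()
print("Discovery task started:", task.get("selfLink", "<no selfLink returned>"))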

Now we import the service configurations that the BIG-IQ needs before it can start managing that BIG-IP device. Click the link that says Complete import tasks.

Next, you’ll begin the process of importing the BIG-IP LTM services for this device. Just like the discovery task, you’ll import LTM first.

Click Import.

iq9.jpg

This could take a little time depending on how many LTM objects are defined on this BIG-IP device. When the import finishes, BIG-IQ displays the date and time the operation completed.

iq91.jpg

Now, we repeat the process for the second service provisioned on this device.

iq92.jpg

Importing an access device like BIG-IP APM is slightly different. Part of the import task is to identify the Access Group that this device uses to share its configuration. Whether you’re adding the device to an existing Access Group or creating a new one, when you’re done entering the name of the group, click Add to start the import process. Here again, the processing time depends on how many BIG-IP APM configuration objects are defined on the device.

iq93.jpg

When the BIG-IP APM services import finishes and the time completed displays, you can simply click Close to complete the task.

iq94.jpg

You can now see that the device has been added to BIG-IQ.

iq95.jpg

That’s it! Now you can start managing the BIG-IP LTM and APM objects on this device. For this article we only imported LTM and APM objects, but the process is the same for all services you manage.

Thanks to our TechPubs group, and watch the video demo here.

ps

Related:

What is BIG-IQ




High Availability Groups on BIG-IP

Posted in f5, big-ip, availability, adc, application delivery by psilva on April 18th, 2017

 

High Availability of applications is critical to an organization’s survival.

On BIG-IP, HA Groups is a feature that allows BIG-IP to fail over automatically based not on the health of the BIG-IP system itself but rather on the health of external resources within a traffic group. These external resources include the health and availability of pool members, trunk links, VIPRION cluster members, or a combination of all three. This is the only failover trigger based on resources outside of the BIG-IP.

An HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

In this scenario, there are three BIG-IP devices – A, B, and C – and each device has two traffic groups on it.

ha1.jpg

 

 

As you can see, for BIG-IP A, traffic-group-1 is active. For BIG-IP B, traffic-group-2 is active, and for BIG-IP C, both traffic groups are in a standby state. Attached to traffic-group-1 on BIG-IP A is an HA group which specifies that a minimum of 3 of the 4 pool members must be up for traffic-group-1 to remain active on BIG-IP A. Similarly, on BIG-IP B, traffic-group-2 needs a minimum of 3 of its 4 pool members up to stay active.

On BIG-IP A, if fewer than 3 of those pool members are up, traffic-group-1 will fail over.

So let’s say that 2 pool members go down on BIG-IP A. Traffic-group-1 responds by failing over to the device (BIG-IP) that has the healthiest pool…which in this case is BIG-IP C.

ha2.jpg

 

Now we see that traffic-group-1 is active on BIG-IP C.
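To make the trigger concrete, here is a minimal Python sketch of the decision an HA group drives. This is illustrative logic only, not BIG-IP’s implementation: each device reports how many monitored pool members are up, and once the active device drops below the HA group’s minimum, the traffic group moves to the device with the healthiest pool.

# Illustrative sketch of HA-group-style failover logic -- not BIG-IP's actual code.
pool_members_up = {"bigip-a": 2, "bigip-b": 3, "bigip-c": 4}  # members up per device
MIN_MEMBERS_UP = 3                                            # HA group threshold
active_device = "bigip-a"

if pool_members_up[active_device] < MIN_MEMBERS_UP:
    # Fail over to whichever standby device currently has the healthiest pool.
    standbys = {d: n for d, n in pool_members_up.items() if d != active_device}
    active_device = max(standbys, key=standbys.get)

print(active_device)  # -> bigip-c, the device with the healthiest pool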

Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this but essentially a few principles apply.

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.

If the first two are in place, hopefully you never see a failure. But if you do, HA Groups can help.

ps

Related:

 

 

 




Lightboard Lessons: Service Consolidation on BIG-IP

Posted in f5, adc, silva, application delivery, lightboard, devcentral, infrastructure, consolidate by psilva on March 29th, 2017

Consolidating point devices and services in your datacenter or cloud can reduce the cost and complexity of your infrastructure and systems while improving efficiency, management, provisioning, and troubleshooting.

In this Lightboard Lesson, I light up many of the services you can consolidate on BIG-IP.

ps

 

Watch Now:



What is an Application Delivery Controller - Part II

Posted in f5, big-ip, availability, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 26th, 2017

devcentral_basics_article_banner.png

Application Delivery Basics

One of the unfortunate effects of the continued evolution of the load balancer into today's application delivery controller (ADC) is that it is often too easy to forget the basic problem for which load balancers were originally created—producing highly available, scalable, and predictable application services. We get too lost in the realm of intelligent application routing, virtualized application services, and shared infrastructure deployments to remember that none of these things are possible without a firm basis in basic load balancing technology. So how important is load balancing, and how do its effects lead to streamlined application delivery?

Let’s examine the basic application delivery transaction. The ADC will typically sit in-line between the client and the hosts that provide the services the client wants to use. As with most things in application delivery, this is not a rule, but more of a best practice in a typical deployment. Let's also assume that the ADC is already configured with a virtual server that points to a cluster consisting of two service points. In this deployment scenario, it is common for the hosts to have a return route that points back to the load balancer so that return traffic will be processed through it on its way back to the client.

The basic application delivery transaction is as follows:

  1. The client attempts to connect with the service on the ADC.
  2. The ADC accepts the connection, and after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
  3. The host accepts the connection and responds back to the original source, the client, via its default route, the load balancer.
  4. The ADC intercepts the return packet from the host and now changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
  5. The client receives the return packet, believing that it came from the virtual server or host, and continues the process.

how_lb_works.png

 

Figure 1. A basic load balancing transaction.

This very simple example is relatively straightforward, but there are a couple of key elements to take note of. First, as far as the client knows, it sends packets to the virtual server and the virtual server responds—simple. Second, this is where the NAT takes place: the ADC replaces the destination IP sent by the client (that of the virtual server) with the destination IP of the host to which it has chosen to load balance the request. Third, the return-path rewrite is the second half of this process (the part that makes the NAT "bi-directional"). The source IP of the return packet from the host will be the IP of the host; if this address were not changed and the packet were simply forwarded to the client, the client would be receiving a packet from someone it didn't request one from, and would simply drop it. Instead, the ADC, remembering the connection, rewrites the packet so that the source IP is that of the virtual server, thus solving this problem.
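To make the rewrite steps explicit, here is a small Python sketch of the bidirectional NAT described above. The addresses and the dictionary-based connection table are invented for illustration; a real ADC does this in its packet-processing path, not in application code.

# Illustrative sketch of the bidirectional NAT an ADC performs (addresses invented).
VIRTUAL_SERVER = ("203.0.113.10", 80)     # what the client connects to
connection_table = {}                     # remembers which host owns each client flow

def client_to_host(packet, chosen_host):
    """Rewrite the destination so the packet reaches the selected host."""
    connection_table[(packet["src_ip"], packet["src_port"])] = chosen_host
    packet["dst_ip"], packet["dst_port"] = chosen_host   # client source IP is left untouched
    return packet

def host_to_client(packet):
    """Rewrite the source back to the virtual server so the client accepts the reply."""
    client = (packet["dst_ip"], packet["dst_port"])
    if client in connection_table:        # a flow the ADC remembers
        packet["src_ip"], packet["src_port"] = VIRTUAL_SERVER
    return packet

request = {"src_ip": "198.51.100.7", "src_port": 51512,
           "dst_ip": VIRTUAL_SERVER[0], "dst_port": VIRTUAL_SERVER[1]}
request = client_to_host(request, ("172.16.1.10", 80))
reply = host_to_client({"src_ip": "172.16.1.10", "src_port": 80,
                        "dst_ip": "198.51.100.7", "dst_port": 51512})
print(request["dst_ip"], reply["src_ip"])  # -> 172.16.1.10 203.0.113.10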

The Application Delivery Decision

So, how does the ADC decide which host to send the connection to? And what happens if the selected host isn't working?

Let's discuss the second question first. What happens if the selected host isn't working? The simple answer is that it doesn't respond to the client request and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most ADC technology includes some level of health monitoring that determines whether a host is actually available before attempting to send connections to it.

There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and should be removed from the cluster of available services. Unfortunately, even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore most devices can do "service PINGs" of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the load balancer to differentiate between multiple services on a single host. The ADC understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered as valid destinations for user traffic.
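As a rough illustration of those monitor tiers, the Python sketch below checks a service first with a plain TCP connection and then with an HTTP request (the host and port are placeholders). Real monitors are far more configurable, but the escalation from "is the port open?" to "does the application answer correctly?" is the same idea.

# Illustrative health-monitor tiers: a TCP-level check, then an application-level check.
# Host and port are placeholders for whatever service you monitor.
import socket
import http.client

def tcp_check(host, port, timeout=3):
    """Lowest-granularity service check: can we complete a TCP handshake?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(host, port, path="/", timeout=3):
    """Higher-granularity check: does the application itself answer sensibly?"""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        healthy = conn.getresponse().status < 500
        conn.close()
        return healthy
    except OSError:
        return False

host, port = "172.16.1.10", 80  # placeholder member
print("TCP up:", tcp_check(host, port), "| HTTP healthy:", http_check(host, port))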

This brings us back to the first question: How does the ADC decide which host to send a connection request to? Each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) which makes up the list of possibilities. Additionally, the health monitoring modifies that list to make a list of "currently available" hosts that provide the indicated service. It is this modified list from which the ADC chooses the host that will receive a new connection. Deciding the exact host depends on the ADC algorithm associated with that particular cluster. The most common is simple round-robin where the ADC simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current-connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services.
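A minimal sketch of the two approaches mentioned above, round robin and a least-connections-style pick, might look like this in Python (member names and connection counts are invented):

# Illustrative selection algorithms over the "currently available" member list.
import itertools

available = ["172.16.1.10:80", "172.16.1.11:80", "172.16.1.12:80"]  # invented members

# Round robin: walk the list, wrap back to the top.
rr = itertools.cycle(available)
print([next(rr) for _ in range(4)])   # the 4th pick wraps back to the first member

# Least connections: pick the member with the fewest active connections.
active_connections = {"172.16.1.10:80": 12, "172.16.1.11:80": 3, "172.16.1.12:80": 7}
print(min(available, key=active_connections.get))   # -> 172.16.1.11:80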

Sufficiently advanced application delivery systems will also be able to synthesize health monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A common example would be in e-commerce situations where a single host will provide both standard HTTP services (port 80) as well as HTTPS (SSL/TLS at port 443) and any other potential service ports that need to be allowed. In many of these circumstances, you don't want a user going to a host that has one service operational, but not the other. In other words, if the HTTPS services should fail on a host, you also want that host's HTTP service to be taken out of the cluster list of available services. This functionality is increasingly important as HTTP-like services become more differentiated with things like XML and scripting.

To Load Balance or Not to Load Balance?

Picking an available service when a client initiates a transaction request is only half of the solution. Once the connection is established, the ADC must keep track of whether the following traffic from that user should be load balanced. There are generally two specific issues with handling follow-on traffic once it has been load balanced: connection maintenance and persistence.

Connection maintenance

If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the ADC must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance and requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most ADCs.
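Conceptually, that connection table is just a lookup keyed on the client flow. The Python sketch below is a simplified illustration with invented flows and members:

# Simplified sketch of connection maintenance: once a flow is assigned to a member,
# every later packet on that flow goes to the same member until the connection closes.
connections = {}   # (client_ip, client_port, protocol) -> chosen member

def assign(flow, member):
    connections[flow] = member

def forward(flow):
    return connections.get(flow)   # existing flow: reuse; None means load balance anew

def close(flow):
    connections.pop(flow, None)    # prune the table when the connection ends

flow = ("198.51.100.7", 51512, "tcp")
assign(flow, "172.16.1.10:21")
print(forward(flow))   # -> 172.16.1.10:21 for every packet on this FTP connection
close(flow)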

Persistence

Increasingly more common, however, is when the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task. In some cases, like standard web browsing, it doesn't matter and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, JavaScript, e-commerce "shopping cart," HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this depending on the protocol and the desired results. For example, in modern HTTP transactions, the server can specify a "keep-alive" connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web and mobile services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most ADCs provide other mechanisms for creating artificial server affinity.

One of the most basic forms of persistence is source-address affinity. Source address affinity persistence directs session requests to the same server based solely on the source IP address of a packet. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency as it can be applied across all virtual servers and all services. In practice however, the widespread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address resulting in none of those users being load balanced after the first user's request—essentially nullifying the ADC capability. Today, the intelligence of ADCs allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as user name, to maintain persistence. However, organizations must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.
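Source-address affinity reduces to a very small table. The Python sketch below shows the idea with invented addresses, and also why a proxy that funnels many users through one source IP defeats it:

# Illustrative source-address affinity: every request from the same source IP
# is pinned to whichever member handled that IP first.
import itertools

members = itertools.cycle(["172.16.1.10:80", "172.16.1.11:80"])
persistence = {}   # source IP -> pinned member

def pick(source_ip):
    if source_ip not in persistence:
        persistence[source_ip] = next(members)   # first request: load balance
    return persistence[source_ip]                # later requests: reuse the pin

print(pick("198.51.100.7"), pick("198.51.100.7"))   # same member both times
# A proxy hides many users behind one IP, so they all land on one member:
print(pick("192.0.2.50"), pick("192.0.2.50"), pick("192.0.2.50"))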

Final Thoughts

It is important to understand that basic load balancing technology, while still in use, is now only considered a feature of Application Delivery Controllers. ADCs evolved from the first load balancers through the service virtualization process, and today are also delivered as software-only virtual editions. They can not only improve availability, but also affect the security and performance of the application services being requested.

Today, most organizations realize that simply being able to reach an application doesn't make it usable; and unusable applications mean wasted time and money for the enterprise deploying them. ADCs enable organizations to consolidate network-based services like SSL/TLS offload, caching, compression, rate-shaping, intrusion detection, application firewalls, and even remote access into a single strategic point that can be shared and reused across all application services and all hosts to create a virtualized Application Delivery Network. Basic load balancing is the foundation without which none of the enhanced functionality of today's ADCs would be possible.

And if you missed What is an ADC Part 1, you can find it here.

ps

Next Steps

Now that you’ve gotten this far, would you like to dig deeper or learn more about how application delivery works? Cool, then check out these resources:

 




What is an Application Delivery Controller - Part 1

Posted in f5, big-ip, adc, application delivery, devcentral by psilva on January 24th, 2017

devcentral_basics_article_banner.png

A Little History

Application Delivery got its start in the form of network-based load balancing hardware. It is the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. These are the true founding fathers of today's ADCs. Because these devices were application-neutral and resided outside of the application servers themselves, they could load balance using straightforward network techniques. In essence, these devices would present a "virtual server" address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server doing bi-directional network address translation (NAT).

network_lb_1_.png

Figure 1: Network-based load balancing appliances.

With the advent of virtualization and cloud computing, the third iteration of ADCs arrived as software delivered virtual editions intended to run on hypervisors. Virtual editions of application delivery services have the same breadth of features as those that run on purpose-built hardware and remove much of the complexity from moving application services between virtual, cloud, and hybrid environments. They allow organizations to quickly and easily spin-up application services in private or public cloud environments.

Basic Application Delivery Terminology

It would certainly help if everyone used the same lexicon; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology. With a little explanation, however, the confusion surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all four, but they mean different things. There are two basic concepts that they all try to express. One concept—usually called a node or server—is the idea of the physical or virtual server itself that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will refer to this concept as the host.

The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the TCP port of the actual application that will be receiving traffic. For instance, a server named www.example.com may resolve to an address of 172.16.1.10, which represents the server/node, and may have an application (a web server) running on TCP port 80, making the member address 172.16.1.10:80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server.  We will refer to this as the service.

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to individually interact with the applications rather than the underlying hardware or hypervisor. A host (172.16.1.10) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80, 172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and health monitoring based on the services instead of the host. However, there are still times when being able to interact with the host (like low-level health monitoring or when taking a server offline for maintenance) is extremely convenient.

Most load balancing-based technology uses some concept to represent the host, or physical server, and another to represent the services available on it— in this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic across multiple back-end destinations, including cloud deployments. It is therefore a necessity to have the concept of a collection of back-end destinations. Clusters, as we will refer to them (also known as pools or farms) are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a cluster called "company web page" and all services that offer e-commerce services would be collected into a cluster called "e-commerce."

The key element here is that all systems have a collective object that refers to "all similar services" and makes it easier to work with them as a single unit. This collective object—a cluster—is almost always made up of services, not hosts.

Virtual Server

Although the term is easily confused with a server that hosts virtual machines, in the context of application delivery a virtual server is the endpoint the ADC presents to the outside world. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term "virtual service" would be more in keeping with the IP:Port convention; but because most vendors, ADC and cloud alike, use virtual server, this article uses virtual server as well.

Putting It All Together

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a cluster of services that reside on one or more physical hosts.

 

 

adc_parts_1_.png

Figure 2: Application Delivery comprises four basic concepts—virtual servers, clusters, services, and hosts.

While the diagram above may not be representative of a real-world deployment, it does provide the elemental structure for continuing a discussion about application delivery basics.
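Expressed as data, the relationship is simple nesting. The Python sketch below uses invented names and addresses just to show how a virtual server maps to a cluster of services, each of which lives on a host:

# Illustrative object model: virtual server -> cluster -> services -> hosts.
hosts = {"web1": "172.16.1.10", "web2": "172.16.1.11"}          # invented hosts

# A service is a host plus the application port it listens on.
services = {
    "web1:http": (hosts["web1"], 80),
    "web2:http": (hosts["web2"], 80),
}

# A cluster (pool/farm) collects similar services, not hosts.
clusters = {"company-web-page": ["web1:http", "web2:http"]}

# The virtual server is what the outside world connects to; it points at a cluster.
virtual_servers = {("203.0.113.10", 80): "company-web-page"}

cluster = clusters[virtual_servers[("203.0.113.10", 80)]]
print([services[s] for s in cluster])   # the real destinations behind the virtual server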

Next Steps

Read What is Load Balancing? if you haven't already and check out ADC Part II coming January 26!

ps

 




What is Load Balancing?

Posted in f5, big-ip, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 23rd, 2017

devcentral_basics_article_banner.png

The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable. Predictability is a little less clear as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. Domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in different order.

dns_lb1.png

 

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1. This solution was simple and provided the basic characteristics of what customers were looking for by distributing users sequentially across multiple physical machines using the name as the virtualization point.
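A toy Python sketch of that rotation (the record addresses are invented): each response hands back the same A records, just starting from a different position.

# Toy sketch of DNS round robin: same records every time, rotated one position per query.
from collections import deque

records = deque(["192.0.2.1", "192.0.2.2", "192.0.2.3"])   # invented A records

def resolve(name):
    answer = list(records)      # hand back all addresses in the current order
    records.rotate(-1)          # the next query starts with the next server
    return answer

print(resolve("www.example.com"))   # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(resolve("www.example.com"))   # ['192.0.2.2', '192.0.2.3', '192.0.2.1']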

From a scalability standpoint, this solution worked remarkably well; that is probably why derivatives of this method are still in use today, particularly for global load balancing, or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however, is that DNS responses have a maximum length that is typically allowed, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing if the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work. 

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

cluster_lb1.png

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability. 

Network-Based Load balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances. These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection on to the most appropriate real server doing bi-directional network address translation (NAT).

network_lb_1_.png

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner—finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was only limited by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware since it was now much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). 

This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While most ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

ps

Next Steps

Ready to plunge into the next level of Load Balancing? Take a peek at these resources:

 

 

 

 

 

 




Your SSL Secrets Uncovered

Posted in f5, big-ip, application security, adc, application delivery, access by psilva on December 9th, 2016

Get Started with SSL Orchestrator

SSL and its successor TLS are becoming more prevalent for securing IP communications on the internet. It’s not just financial, health care, or other sensitive sites; even search engines routinely use the encryption protocol. This can be good or bad. Good, in that all communications are scrambled from prying eyes, but potentially hazardous if attackers are hiding malware inside encrypted traffic. If the traffic is encrypted and simply passed through, inspection engines are unable to intercept that traffic for a closer look like they can with clear text communications. The entire ‘defense-in-depth’ strategy with IPS systems and NGFWs loses effectiveness.

F5 BIG-IP can solve these SSL/TLS challenges with an advanced threat protection system that enables organizations to decrypt encrypted traffic within the enterprise boundaries, send it to an inspection engine, and gain visibility into outbound encrypted communications to identify and block zero-day exploits. In this case, only the interesting traffic is decrypted for inspection, not all of the wire traffic, thereby conserving processing resources of the inspecting device. You can dynamically chain services based on a context-based policy to efficiently deploy security.

This solution is supported across the existing F5 BIG-IP v12 family of products with F5 SSL Orchestrator and is integrated with solutions such as FireEye NX, Cisco ASA FirePOWER, and Symantec DLP.

Here I’ll show you how to complete the initial setup.

A few things to know beforehand: from a licensing perspective, the F5 SSL visibility solution can be deployed using either the BIG-IP system or the purpose-built SSL Orchestrator platform. Both have the same SSL intercept capabilities but different licensing requirements.

To deploy using BIG-IP, you’ll need BIG-IP LTM for SSL offload, traffic steering, and load balancing, and the SSL forward proxy for outbound SSL visibility. Optionally, you can also consider the URL filtering subscription to enforce corporate web use policies and/or the IP Intelligence subscription for reputation-based web blocking. For the purpose-built solution, all you’ll need is the F5 Security SSL Orchestrator hardware appliance.

The initial setup addresses URL filtering, SSL bypass, and the F5 iApps template.

URL filtering allows you to select specific URL categories that should bypass SSL decryption. Normally this is done for concerns over user privacy or for categories that contain items (such as software update tools) that may rely on specific SSL certificates to be presented as part of a verification process.

Before configuring URL filtering, we recommend updating the URL database. This must be performed from the BIG-IP system command line. Make sure the BIG-IP system can reach download.websense.com on port 80, then from the BIG-IP LTM command line, type the following commands:

modify sys url-db download-schedule urldb download-now false

modify sys url-db download-schedule urldb download-now true

To list all the supported URL categories by the BIG-IP system, run the following command:

tmsh list sys url-db url-category | grep url-category

Next, you’ll want to configure data groups for SSL bypass. You can choose to exempt SSL offloading based on various parameters like source IP address, destination IP address, subnet, hostname, protocol, URL category, IP intelligence category, and IP geolocation. This is achieved by configuring the SSL bypass in the iApps template calling the data groups in the TCP service chain classifier rules. A data group is a simple group of related elements, represented as key value pairs. The following example provides configuration steps for creating a URL category data group to bypass HTTPS traffic of financial websites.

ssl1.png

ssl2.png
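Conceptually, what the data group gives the service chain classifier is a simple key-value lookup. The Python sketch below mimics that decision with invented category names; it is only meant to show the logic, not the actual BIG-IP configuration, which is done through the data group and iApps template as shown above.

# Conceptual sketch of a data-group-style lookup driving an SSL-bypass decision.
# Category names are invented; the real configuration lives in the data group and iApp.
bypass_categories = {"Financial_Data_and_Services", "Health_and_Medicine"}

def should_bypass(url_category):
    """Return True when traffic in this category should skip decryption."""
    return url_category in bypass_categories

print(should_bypass("Financial_Data_and_Services"))  # True  -> pass through encrypted
print(should_bypass("Search_Engines"))               # False -> decrypt and inspect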

For the BIG-IP system deployment, download the latest release of the iApps template and import to the BIG-IP system.

Extract (unzip) the ssl-intercept-12.1.0-1.5.7.zip template (or any newer version available) and follow the steps to import it into the BIG-IP web configuration utility.

ssl3.png

From there, you’ll configure your unique inspection engine by following the iApp questionnaire in the BIG-IP admin UI. You’ll need to select and/or fill in different values in the wizard to enable the SSL orchestration functionality. We have deployment guides with the detailed specifics, and from there you’ll be able to send the now-decrypted traffic to your inspection engine for a more secure network.

ps

Resources:




Q/A with Secure-24’s Josh Becigneul - DevCentral’s Featured Member for September

Posted in f5, big-ip, adc, interview, silva, application delivery, devcentral by psilva on September 6th, 2016

Josh.jpg

Josh Becigneul is the ADC Engineer for Secure-24 and DevCentral’s Featured Member for September!

Josh has been working in the IT industry in various positions for a little over 10 years. He’s moved through various disciplines including MS server administration, Linux, Networking, and now has been working primarily with F5 BIG-IPs. For the past 3 years he has focused on F5’s products and growing a team of engineers to manage them. Secure-24 delivers managed IT operations, application hosting and managed cloud services to enterprises worldwide.

DevCentral got an opportunity to talk with Josh about his work, life and the importance of being F5 Certified.

DevCentral: You’ve been an active contributor to the DevCentral community, and we wondered: what keeps you involved?

Josh Becigneul: DevCentral has helped me greatly over the years as I’ve worked with F5 products, so I feel like it’s worth some of my time to spend both reading posts and helping others in the community. When I started off it helped to be able to explain a need and have someone create a basic iRule, or point me towards documentation explaining something. Now that my skills have grown, I want to pay it forward.

DC: Tell us a little about the areas of BIG-IP expertise you have.

JB: I started off on just BIG-IP LTM but over the years have grown into managing APM, GTM, ASM, and sometimes a mix of each. I’ve worked with 1500s, 1600s, 3600s, 3900s, and VIPRION, as well as Enterprise Manager and now BIG-IQ too.

DC: You are an ADC Engineer with Secure-24, an application hosting and cloud services organization. Can you explain how DevCentral helps with your daily challenges? Where does BIG-IP fit in the services you offer or within your own infrastructure?

secure24.jpg

JB: At Secure-24, BIG-IP has grown into an essential product for many portions of our organization, along with many of our customers utilizing its services to deliver their applications. We’ve got a large number of LTM customers, APM customers and we’ve been growing into ASM. GTM provides advanced DNS services for many of our customers around the globe. Most deployments using BIG-IP are custom tailored to suit the needs of the particular customer. These can vary from basic load balancing to advanced content steering, or small deployments of a few virtual services to large ones comprised of hundreds.

With the variety of F5 products in use, having a resource like DevCentral is invaluable to our team. From being able to ask my peers questions about things, or utilizing the codeshare and wiki to learn more about iRules and iControl, I couldn’t imagine it not being available.

DC: Describe one of your biggest BIG-IP challenges and how DevCentral helped in that situation.

JB: One of the most useful things iRules allow us to do is virtual hosting; running many services behind a virtual service. Coupling this with APM allowed us to greatly simplify remote access for us and our customers. For several customers, we used APM to migrate them away from MS Forefront.

DC: I understand you are an F5 Certified Professional. Can you tell us about that and why you feel it is beneficial?

JB: Yes, I first became F5 Certified in 2015 with my 201 Certified BIG-IP Administrator, and followed that up at 2016’s F5 Agility conference by obtaining my 304 APM Specialist. I feel it is beneficial because it helps to reinforce what I’ve learned over the years, and (hopefully) lets my customers feel like they are in good hands. (DC: Josh also recently passed the 302 GTM Exam!)

DC: Lastly, if you weren’t an IT admin – what would be your dream job? Or better, when you were a kid – what did you want to be when you grew up?

JB: I’d probably be a roadie, and tour the world doing lights and sound for a huge band!

DC: Thanks Josh and get us backstage passes! Check out all of Josh’s DevCentral contributions, connect on LinkedIn and follow both Josh @vsnine and @secure_24.

And if you'd like to nominate someone to be the DevCentral Featured Member, please send your suggestions to the DevCentral Team!




The Road to F5 Certification

Posted in f5, big-ip, adc, application delivery, devcentral, certification by psilva on July 19th, 2016

dc-logo.jpg

Over the last 4 months, the DevCentral team has been preparing for the F5 Certification exam. We’ve met a number of times for group study, and for each session we reviewed a particular section of the Exam 101 - Application Delivery Fundamentals Study Guide. We prepared and presented a certain topic and had open discussions about particular use cases and customer scenarios, and even played some guessing games as to what might be asked on the exam for that section.

Now the time has come to take the test.

Since the DevCentral team will be at Agility 2016 in Chicago this year, we decided to take advantage of the Certification Team’s mobile testing center. While you can certainly go to one of Pearson Vue’s test centers, the Certification Team will be on hand at F5 Agility to administer their various exams for those looking to get F5 Certified. It’s a pretty cool set up – almost like a band on a mini regional tour. They’ll have everything you need to take the test.

I gotta tell you, I’m a little nervous.

f5_cert.jpg

I’m sure I’ll be able to nail sections 2-5 since those are the areas I’ve focused on for the past decade…it’s the first part, OSI, that I’m a little wary of. Not that I don’t know my 7 layers – All People Seem To Need Data Processing – but maybe some of the nuances, or my lack of recent real-world subnetting, concern me. I’ll use this last month before the exam to keep prepping to make sure I don’t embarrass myself.

But let's look at the stats.

Recently Ken Salchow, F5’s Sr. Manager of Professional Certifications, has posted some interesting statistics about the program, particularly pass rates and certification by region. Ken notes about the pass rate graph, ‘I am also often asked about exam pass rates ... which is not an easy thing to really post. Below is a graph that shows ALL TIME pass rates by exam. It is important to note that these pass rates encompass thousands of exams and even different versions of exams. As such, take these with a grain of salt and realize that if I did a 12-month average, 24-month average and last month average, they would all differ from the below. Oh ... and have I mentioned how much I distrust data coming from our candidate management system?? Yeah ... so ... you’ve been warned.’

And the graph:

pass_rate.jpg

So there's a 70% pass rate on the 101. Fairly decent.

Ken also posted another chart which shows the breakdown of certification by region as a percentage of the whole.

cert_region.jpg

Nice mix of global certifications.

We - the DevCentral team - will take some pictures and let you know how we did. If you are at Agility and taking a Certification exam this year, let’s compare notes for the final wrap. Pass or fail.

My energy says, 'Success!'

ps

Related:




AWS re:Invent 2013 – Cloud Migration Reference Architecture (feat. Pearce)

Posted in f5, big-ip, cloud, cloud computing, adc, silva, video, application delivery, management, control, cloud vendor by psilva on November 14th, 2013

I meet with Sr. Technical Marketing Manager Nathan Pearce to whiteboard his Cloud Migration reference architecture. Instead of a temporary, short-term cloud burst, now we discuss migrating applications to the cloud…as in a DevOps scenario.

ps

Related:

 

 




