Archive for load balance

Configure HA Groups on BIG-IP

Posted in f5, big-ip, availability, load balance, application delivery, devcentral by psilva on April 25th, 2017

Last week we talked about how HA Groups work on BIG-IP and this week we’ll look at how to configure HA Groups in BIG-IP.

To recap, an HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

First, some prerequisites:

  • Basic Setup: Each BIG-IP (v13) is licensed, provisioned and configured to run BIG-IP LTM
  • HA Configuration: All BIG-IP devices are members of a sync-failover device group and synced
  • Each BIG-IP has a unique virtual server with a unique server pool assigned to it
  • All virtual addresses are associated with traffic-group-1

To the BIG-IP GUI!

First, go to System > High Availability > HA Group List and click the Create button.

hab1.jpg

 

The first step is to name the group. Give it a descriptive name that indicates the object type, the device it pertains to, and the traffic group it applies to. In this case, we’ll call it ‘ha_group_deviceA_tg1.’

hab2.jpg

 

Next, we’ll click Add in the Pools area under Health Conditions and add the pool we’ve already created for BIG-IP A to the HA Group. We then move on to the minimum member count: the number of pool members that must be up for traffic-group-1 to remain active on BIG-IP A. In this case, we want 3 of the 4 members to be up. If the number of available members falls below 3, the BIG-IP system will automatically fail the traffic group over to another device in the device group.

hab34.jpg

 

Next is the HA Score. The sufficient threshold is the number of up pool members required for a full health score; in this case, we’ll choose 4. If all 4 pool members are up, the pool has a full health score; if fewer than 4 are up, the health score is lower. We’ll give it a default weight of 10, since 10 represents the full HA score for BIG-IP A. In other words, all 4 members must be active in the pool for the BIG-IP system to give BIG-IP A an HA score of 10. Then click Add.
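
To make the arithmetic concrete, here is a minimal Python sketch of how the minimum member count, the sufficient threshold, and the weight relate to each other. It is only an illustration of the idea, not how BIG-IP computes the score internally, and the numbers are simply the example values from this walkthrough.

# Hypothetical sketch of the HA group math described above (not BIG-IP code).

def ha_score(members_up, sufficient_threshold=4, weight=10):
    """Scale the weight by how many pool members are up, capped at the sufficient threshold."""
    return weight * min(members_up, sufficient_threshold) / sufficient_threshold

def traffic_group_stays_active(members_up, minimum_member_count=3):
    """traffic-group-1 stays on this device only while the minimum member count is met."""
    return members_up >= minimum_member_count

print(ha_score(4))                    # 10.0 -> full HA score for BIG-IP A
print(ha_score(3))                    # 7.5  -> degraded score
print(traffic_group_stays_active(2))  # False -> fail over to another device in the device group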

hab6.jpg

 

We’ll then see a summary of the health conditions we just specified, including the minimum member count and the sufficient member count. Then click Create HA Group.

hab7.jpg

 

Next, we go to Device Management > Traffic Groups and click traffic-group-1.

hab8.jpg

 

Now, we’ll associate this new HA Group with traffic-group-1. Go to the HA Group setting and select the new HA Group from the drop-down list. Then set the Failover Method to ‘Device with the Best HA Score.’ Click Save.

hab81.jpg

 

Now we do the same thing for BIG-IP B. Again, go to System > High Availability > HA Group List and click the Create button. Give it a descriptive name, click Add in the Pools area, and select the pool you’ve already created for BIG-IP B. Again, for our situation, we’ll require a minimum of 3 members to be up for traffic-group-1 to remain active on BIG-IP B. This minimum does not have to match the other HA Group, but it does in this example. As before, give all pool members a default weight of 10 for the HA Score. Click Add and then Create HA Group for BIG-IP B.

hab82.jpg

 

Then go to Device Management > Traffic Groups and click traffic-group-1. Choose BIG-IP B’s HA Group and select the same failover method as BIG-IP A: Device with the Best HA Score. Click Save.

Lastly, create another HA Group on BIG-IP C as we’ve done on BIG-IP A and BIG-IP B. Once that’s done, you’ll have the same setup as this:

ha2a.jpg

As you can see, BIG-IP A has lost another pool member, causing traffic-group-1 to fail over. The BIG-IP software has chosen BIG-IP C as the next active device to host the traffic group because BIG-IP C has the highest HA score based on the health of its pool.

Thanks to our TechPubs group for the basis of this article, and check out a video demo here.

ps


What is an Application Delivery Controller - Part II

Posted in f5, big-ip, availability, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 26th, 2017


Application Delivery Basics

One of the unfortunate effects of the continued evolution of the load balancer into today's application delivery controller (ADC) is that it is often too easy to forget the basic problem for which load balancers were originally created—producing highly available, scalable, and predictable application services. We get too lost in the realm of intelligent application routing, virtualized application services, and shared infrastructure deployments to remember that none of these things are possible without a firm basis in basic load balancing technology. So how important is load balancing, and how do its effects lead to streamlined application delivery?

Let’s examine the basic application delivery transaction. The ADC will typically sit in-line between the client and the hosts that provide the services the client wants to use. As with most things in application delivery, this is not a rule, but more of a best practice in a typical deployment. Let's also assume that the ADC is already configured with a virtual server that points to a cluster consisting of two service points. In this deployment scenario, it is common for the hosts to have a return route that points back to the load balancer so that return traffic will be processed through it on its way back to the client.

The basic application delivery transaction is as follows:

  1. The client attempts to connect with the service on the ADC.
  2. The ADC accepts the connection, and after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
  3. The host accepts the connection and responds back to the original source, the client, via its default route, the load balancer.
  4. The ADC intercepts the return packet from the host and now changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
  5. The client receives the return packet, believing that it came from the virtual server or host, and continues the process.

Figure 1. A basic load balancing transaction.

This very simple example is relatively straightforward, but there are a couple of key elements to take note of. First, as far as the client knows, it sends packets to the virtual server and the virtual server responds—simple. Second, the NAT takes place. This is where the ADC replaces the destination IP sent by the client (that of the virtual server) with the destination IP of the host to which it has chosen to load balance the request. Third, the second half of this process takes place (the part that makes the NAT "bi-directional"). The source IP of the return packet from the host will be the IP of the host; if this address were not changed and the packet were simply forwarded to the client, the client would be receiving a packet from someone it didn't request one from, and would simply drop it. Instead, the ADC, remembering the connection, rewrites the packet so that the source IP is that of the virtual server, thus solving this problem.
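
As a purely illustrative sketch (the addresses, dictionary-based "packets," and connection table are invented for the example and are not how any particular ADC implements this), the two rewrites look something like this:

# Illustrative sketch of the bi-directional NAT an ADC performs -- not a real implementation.

connection_table = {}  # (client_ip, client_port) -> host chosen by the load balancing decision

def forward_to_host(packet, chosen_host):
    """Client -> ADC: rewrite the destination from the virtual server to the selected host.
    The client's source IP is left untouched."""
    connection_table[(packet["src_ip"], packet["src_port"])] = chosen_host
    return {**packet, "dst_ip": chosen_host}

def return_to_client(packet, virtual_ip):
    """Host -> ADC -> client: rewrite the source back to the virtual server, so the
    client sees a reply from the address it originally contacted."""
    return {**packet, "src_ip": virtual_ip}

request = {"src_ip": "198.51.100.10", "src_port": 51000, "dst_ip": "203.0.113.1", "dst_port": 80}
to_host = forward_to_host(request, "10.0.0.11")      # step 2 above
reply = {"src_ip": "10.0.0.11", "src_port": 80, "dst_ip": "198.51.100.10", "dst_port": 51000}
to_client = return_to_client(reply, "203.0.113.1")   # step 4 above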

The Application Delivery Decision

So, how does the ADC decide which host to send the connection to? And what happens if the selected host isn't working?

Let's discuss the second question first. What happens if the selected host isn't working? The simple answer is that it doesn't respond to the client request and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most ADC technology includes some level of health monitoring that determines whether a host is actually available before attempting to send connections to it.

There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and should be removed from the cluster of available services. Unfortunately, even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore most devices can do "service PINGs" of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the load balancer to differentiate between multiple services on a single host. The ADC understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered as valid destinations for user traffic.
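
A rough Python sketch of those monitoring levels follows. It is illustrative only: the ping flags assume a Linux-style ping command, and the expected content check is a placeholder for whatever "intelligent interaction" a real monitor would perform.

# Sketch of increasingly granular health monitors -- illustrative, not an ADC implementation.
import socket
import subprocess
import urllib.request

def ping_monitor(host):
    """Lowest level: is the host itself reachable at all? (Linux-style ping flags.)"""
    return subprocess.call(["ping", "-c", "1", "-W", "1", host],
                           stdout=subprocess.DEVNULL) == 0

def tcp_monitor(host, port):
    """Service level: does the service port accept a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def http_monitor(url, expected=b"OK"):
    """Application level: does the service return the content we expect?"""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return expected in resp.read()
    except OSError:
        return False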

This brings us back to the first question: How does the ADC decide which host to send a connection request to? Each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) which makes up the list of possibilities. Additionally, the health monitoring modifies that list to make a list of "currently available" hosts that provide the indicated service. It is this modified list from which the ADC chooses the host that will receive a new connection. Deciding the exact host depends on the ADC algorithm associated with that particular cluster. The most common is simple round-robin where the ADC simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current-connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services.
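
A minimal sketch of that decision, assuming hypothetical "health" and "connections" mappings that stand in for the monitor results and connection counters an ADC would maintain:

# Sketch: build the "currently available" list, then pick a host from it.
import itertools

def available_hosts(cluster, health):
    """Filter the configured cluster down to hosts whose monitors report them as up."""
    return [h for h in cluster if health.get(h, False)]

def round_robin(hosts):
    """Simple and predictable: hand out hosts in order, wrapping back to the top."""
    rotation = itertools.cycle(hosts)
    return lambda: next(rotation)

def least_connections(hosts, connections):
    """A more advanced choice: pick the host with the fewest active connections."""
    return min(hosts, key=lambda h: connections.get(h, 0))

cluster = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
up = available_hosts(cluster, {"10.0.0.11": True, "10.0.0.12": False, "10.0.0.13": True})
next_host = round_robin(up)
print(next_host(), next_host(), next_host())   # .11, .13, .11, ...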

Sufficiently advanced application delivery systems will also be able to synthesize health monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A common example would be in e-commerce situations where a single host will provide both standard HTTP services (port 80) as well as HTTPS (SSL/TLS on port 443) and any other potential service ports that need to be allowed. In many of these circumstances, you don't want a user going to a host that has one service operational but not the other. In other words, if the HTTPS service should fail on a host, you also want that host's HTTP service to be taken out of the cluster list of available services. This functionality is increasingly important as HTTP-like services become more differentiated with things like XML and scripting.
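
In sketch form (using the same hypothetical health mapping as above), the dependency rule is simply: if any service in the group fails on a host, mark them all down on that host:

# Sketch of service dependency: HTTP and HTTPS on the same host stand or fall together.
def apply_dependency(health, host, group=("http", "https")):
    """health maps (host, service) -> bool; if any grouped service is down, down them all."""
    if not all(health.get((host, svc), False) for svc in group):
        for svc in group:
            health[(host, svc)] = False
    return health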

To Load Balance or Not to Load Balance?

Load balancing, in the sense of picking an available service when a client initiates a transaction request, is only half of the solution. Once the connection is established, the ADC must keep track of whether the following traffic from that user should be load balanced. There are generally two specific issues with handling follow-on traffic once it has been load balanced: connection maintenance and persistence.

Connection maintenance

If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the ADC must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance and requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most ADCs.
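
A sketch of the connection table idea (hypothetical and heavily simplified; a real ADC tracks far more state per flow):

# Sketch of connection maintenance: pin an open connection to the host it started on.
connections = {}   # (client_ip, client_port, protocol) -> host

def pick_host(flow, choose_new_host):
    """Reuse the host for an existing flow; only brand-new flows get load balanced."""
    if flow not in connections:
        connections[flow] = choose_new_host()
    return connections[flow]

def connection_closed(flow):
    """Remove closed flows so the table stays accurate and resources are freed."""
    connections.pop(flow, None)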

Persistence

Increasingly more common, however, is when the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task. In some cases, like standard web browsing, it doesn't matter and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, JavaScript, e-commerce "shopping cart," HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this depending on the protocol and the desired results. For example, in modern HTTP transactions, the server can specify a "keep-alive" connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web and mobile services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most ADCs provide other mechanisms for creating artificial server affinity.

One of the most basic forms of persistence is source-address affinity. Source address affinity persistence directs session requests to the same server based solely on the source IP address of a packet. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency as it can be applied across all virtual servers and all services. In practice, however, the widespread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address, resulting in none of those users being load balanced after the first user's request—essentially nullifying the ADC capability. Today, the intelligence of ADCs allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as user name, to maintain persistence. However, organizations must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.
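
A sketch of source-address affinity, and of why a proxy defeats it, might look like this (the table and addresses are invented for illustration):

# Sketch of source-address affinity persistence.
persistence = {}   # source IP -> host chosen for that client's first request

def persist_by_source_ip(src_ip, choose_new_host):
    """Every request from the same source IP goes to the host picked for its first request."""
    if src_ip not in persistence:
        persistence[src_ip] = choose_new_host()
    return persistence[src_ip]

# Thousands of users behind one proxy share a single source IP, so they all land on the
# same back-end host -- effectively no load balancing after the first user's request.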

Final Thoughts

It is important to understand that basic load balancing technology, while still in use, is now considered only a feature of Application Delivery Controllers. ADCs evolved from the first load balancers through the service virtualization process, and today are also available as software-only virtual editions. They can not only improve availability, but also affect the security and performance of the application services being requested.

Today, most organizations realize that simply being able to reach an application doesn't make it usable; and unusable applications mean wasted time and money for the enterprise deploying them. ADCs enable organizations to consolidate network-based services like SSL/TLS offload, caching, compression, rate-shaping, intrusion detection, application firewalls, and even remote access into a single strategic point that can be shared and reused across all application services and all hosts to create a virtualized Application Delivery Network. Basic load balancing is the foundation without which none of the enhanced functionality of today's ADCs would be possible.

And if you missed What is an ADC Part 1, you can find it here.

ps

Next Steps

Now that you’ve gotten this far, would you like to dig deeper or learn more about how application delivery works? Cool, then check out these resources:


What is Load Balancing?

Posted in f5, big-ip, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 23rd, 2017


The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable. Predictability is a little less clear as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. Domain Name System (DNS) is the service that translates human-readable names (www.example.com) into machine-recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses, in a different order each time.

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server hosting the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1. This solution was simple and provided the basic characteristics customers were looking for by distributing users sequentially across multiple physical machines, using the name as the virtualization point.
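
A toy sketch of that rotation (the addresses are examples only, and real DNS servers implement this very differently):

# Sketch of DNS round-robin: the same A records, rotated on every response.
from collections import deque

records = deque(["192.0.2.1", "192.0.2.2", "192.0.2.3"])

def resolve(name="www.example.com"):
    """Return all addresses, then rotate so the next query sees a different first entry."""
    answer = list(records)
    records.rotate(-1)     # 1,2,3 -> 2,3,1 -> 3,1,2 -> ...
    return answer

print(resolve())   # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(resolve())   # ['192.0.2.2', '192.0.2.3', '192.0.2.1']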

From a scalability standpoint, this solution worked remarkably well, which is probably why derivatives of this method are still in use today, particularly for global load balancing, or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however: DNS responses have a maximum allowed length, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing if the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work. 

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.
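
A sketch of that least-utilized decision (the session counts are hypothetical, and the actual cluster protocols were proprietary and vendor-specific):

# Sketch: cluster members share session counts and redirect new clients to the least-loaded member.
session_counts = {"10.0.0.11": 12, "10.0.0.12": 7, "10.0.0.13": 9}

def redirect_target():
    """Whichever member answers the cluster IP first sends the client here instead."""
    return min(session_counts, key=session_counts.get)

print(redirect_target())   # 10.0.0.12 -- the member servicing the fewest sessions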

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability. 

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances. These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, the device would forward the connection on to the most appropriate real server while performing bi-directional network address translation (NAT).

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner—finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was limited only by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by network-based load balancing hardware, since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). 

This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While most ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

ps

Next Steps

Ready to plunge into the next level of Load Balancing? Take a peek at these resources:



Highly Available Hybrid

Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this but essentially a few principles apply.

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.

If the first two are in place, you hopefully never see a failure, but maintenance is still a must.

BIG-IP high availability (HA) functionality, such as connection mirroring, configuration synchronization, and network failover, allows core system services to be available for BIG-IP to manage in the event that a particular application instance becomes unavailable. Organizations can synchronize BIG-IP configurations across data centers to ensure the most up-to-date policy is being enforced throughout the entire infrastructure. In addition, BIG-IP itself can be deployed as a redundant system in either active/standby or active/active mode.

soldiag.jpg

Web applications come in all shapes and sizes, from static to dynamic, from simple to complex, from specific to general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute the client request to a pool of (multiple) application servers, which then query the database servers for the appropriate content. Each tier has redundant servers, so in the event of a server outage the others take the load and the system stays available.

This is a tried and true design for most operations and provides resilient application availability within a typical data center. But fault tolerance between two data centers is even more reliable than multiple servers in a single location, simply because a single data center is itself a single point of failure.

A hybrid data center approach allows organizations to not only distribute their applications when it makes sense but can also provide global fault tolerance to the system overall. Depending on how an organization’s disaster recovery infrastructure is designed, this can be an active site, a hot-standby, some leased hosting space, a cloud provider or some other contained compute location. As soon as that server, application, or even location starts to have trouble, organizations can seamlessly maneuver around the issue and continue to deliver their applications.

Driven by applications and workloads, a hybrid data center is really a technology strategy for the entire infrastructure mix of on-premises and off-premises compute resources. IT workloads may reside in conventional enterprise IT (legacy systems), an on-premises private cloud (mission-critical apps), at a third-party off-premises location (managed, hosting, or cloud provider), or in a combination of all three.

The various combinations of hybrid data center types can be as diverse as the industries that use them. Enterprises probably already have some level of hybrid, even if it is only a mix of owned space plus SaaS. Enterprises typically like to keep sensitive assets in house, but have started to migrate workloads to hybrid data centers. Financial industries might have different requirements than retail. Startups might begin completely with a cloud-based service and then build their own facility if needed. Mobile app developers, particularly game developers, often use the cloud for development and then bring it in-house once the product is released. Enterprises, on the other hand, have (historically) developed in house and then pushed out to a data center when ready. The variety of industries, situations, and challenges the hybrid approach can address is vast.

Manage services rather than boxes.

ps

Related




In 5 Minutes or Less - Enterprise Manager v3.0

In my 21st In 5 Video, I show you some of the new features available in Enterprise Manager v3.0, in 5 Minutes or Less.  We cover the Centralized Analytics Module, iHealth integration and Multi-device configuration comparison.

As a follow-up to the release of BIG-IP v11.2, Enterprise Manager v3.0 is now available.  In addition to support for v11.2, this release provides some key new capabilities that will help you monitor and understand application performance across all BIG-IP devices, and simplify and automate the management of your F5 ADN infrastructure.

ps

Resources:

Technorati Tags: F5,big-ip,security,management,infrastructure,big data,cloud,em,enterprise manager, analytics,ihealth,video




Audio White Paper - High-Performance DNS Services in BIG-IP Version 11

With an increasing number of devices, applications, and services on the Internet, it’s becoming more difficult to achieve network and application response times that deliver a quality user experience. This problem is not only a bandwidth issue—it’s also closely tied to network and infrastructure architecture.  To provide high-quality user experiences on the Internet, networks must be designed with optimized, secure, highly available, and high-performance IP services. Domain Name System (DNS) is one of the most difficult, but important, IP services to optimize and secure. DNS Services in F5 BIG-IP version 11 provides an intelligent DNS architecture that delivers high performance and scalability while negating the effects of network attacks.  Running Time: 14:17  Read full white paper here.  And click here for more F5 Audio.

ps

Technorati Tags: F5, integration, data center, Pete Silva, security, business, dnssec, technology, application delivery, dns, cloud, gtm, dynamic, web, internet, security, hardware, audio, whitepaper, big-ip




Interop 2011 - TMCNet Interview

This time I’m on the other side as Rich Tehrani, CEO of TMCnet, interviews me at Interop.  We talk about the VIPRION 2400, vCMP, F5 in the Interop NOC, BIG-IP as a strategic point of control and some other market observations.

ps

Resources:

Technorati Tags: F5, interop, Pete Silva, security, business, education, technology, internet, big-ip, VIPRION, vCMP, ixia, performance, ssl tps, testing





Lost Your Balance? Drop The Load and Deliver!

It’s not named dough, melted cheese, mushrooms, and pepperoni balancing.  It’s called Pizza Delivery.  The user makes a request, either over the phone or online, with all the context of the ingredients and specifics of the request.  The Pizza Parlor then confirms the delivery location, gets to work, and tries to deliver it to the destination as fast as it can.  The request arrives, both parties validate the order, sometimes with a two-person handshake, and the user consumes the content that was delivered.  Somewhat similar, but much faster, is what happens when a user makes a request from a web application.  They type in the location they want to go to; the ADC considers contextual information like user, IP address, browser type, location, and other variables; and then it delivers the specific content being requested, as fast as possible.  It’s not about load balancing an application, it’s about Application Delivery.

If you’ve lost your balance, then your equilibrium might be off and that is not a good thing.  You might have blurred vision, trouble hearing, dizziness and headaches and your decision making process could be off kilter.  You are slow to react, misunderstand requests, and give someone something they didn’t ask for or something different than what they asked for.  You are unable to take requests, process the information load and deliver an answer.

Load balancing an application is no longer sufficient to ensure that the right users are receiving the right information at the right time, quickly, efficiently, and securely.  Load balancing almost seems like an afterthought, something that happens late in the process of delivering an application.  You need to take into account the various variables of the user request and deliver that application based on that contextual information.  We use contextual information all the time to make our little daily decisions.  Which jacket to wear?  Well, what’s the temperature; is it raining; what am I doing; what’s the forecast; does it have pockets; does it have a hood; is it a zipper or a pullover; and so forth.  Of course all this happens in an instant and we select what is needed.  You can’t make application delivery decisions simply based on ‘next in line’; those judgments need to consider all the available information to make an informed application delivery decision.

ps

Resources:

Technorati Tags: F5,F5 News,Interop,NOC,IPv6,BIG-IP,application delivery,network,Pete Silva, event, #50waystousebigip,50 Ways,availability,BIG-IP





In 5 Minutes or Less Video Series

Posted in security, f5, big-ip, availability, silva, video, load balance, application delivery, fun, management by psilva on February 1st, 2011

In 5 Minutes or Less Video Series promo. I decided to create a quick rundown of all the In 5 Minutes videos that are available.  See them all in 3 minutes…or Less.  ps

Resources:


Technorati Tags: F5, In 5 Minutes, integration, big-ip, Pete Silva, security, business, education, technology, application delivery, cloud, context-aware, automation, web, video, blog, APM




Audio White Paper - Application Delivery Hardware A Critical Component

Application Delivery Controllers (ADCs) come in a variety of hardware and software combinations, but mission-critical application delivery demands mission-critical ADC hardware.  Running Time: 17:58  Read full white paper here.  And click here for more F5 Audio.

ps

twitter: @psilvas

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, data center




