Archive for availability

Configure HA Groups on BIG-IP

Posted in f5, big-ip, availability, load balance, application delivery, devcentral by psilva on April 25th, 2017

Last week we talked about how HA Groups work on BIG-IP; this week we'll look at how to configure them.

To recap, an HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

First, some prerequisites:

  • Basic Setup: Each BIG-IP (v13) is licensed, provisioned and configured to run BIG-IP LTM
  • HA Configuration: All BIG-IP devices are members of a sync-failover device group and synced
  • Each BIG-IP has a unique virtual server with a unique server pool assigned to it
  • All virtual addresses are associated with traffic-group-1

To the BIG-IP GUI!

First, go to System > High Availability > HA Group List, then click the Create button.

hab1.jpg

 

The first thing is to name the group. Give it a descriptive name that indicates the object type, the device it pertains to, and the traffic group it pertains to. In this case, we’ll call it ‘ha_group_deviceA_tg1.’

hab2.jpg

 

Next, under Health Conditions, we’ll click Add in the Pools area and add the pool we’ve already created for BIG-IP A to the HA Group. We then move on to the minimum member count: the number of pool members that must be up for traffic-group-1 to remain active on BIG-IP A. In this case, we want 3 out of 4 members to be up. If the number of up members falls below 3, the BIG-IP will automatically fail the traffic group over to another device in the device group.

hab34.jpg

 

Next is the HA Score. The sufficient threshold is the number of up pool members you want to represent a full health score; in this case, we’ll choose 4. So if 4 pool members are up, the pool is considered to have a full health score; if fewer than 4 members are up, the health score is lower. We’ll give it a default weight of 10, since 10 represents the full HA score for BIG-IP A. In other words, all 4 members need to be active in the group for the BIG-IP software to give BIG-IP A an HA score of 10. And we click Add.
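The minimum member count and HA score logic described above can be sketched as a simplified model. Note this is an illustrative sketch only: the function names and the exact scoring formula are assumptions for explanation, not F5's actual implementation.

```python
# Simplified model of the HA group decision described above.
# The scoring formula is an assumption for illustration only.

def ha_score(up_members: int, sufficient: int, weight: int = 10) -> float:
    """Scale the weight by how many members are up, relative to the
    sufficient threshold. With sufficient=4 and weight=10, 4 up
    members yield the full score of 10; 3 up members yield 7.5."""
    return weight * min(up_members, sufficient) / sufficient

def traffic_group_should_fail_over(up_members: int, minimum: int = 3) -> bool:
    """The minimum member count: fewer than `minimum` up members
    triggers failover of the traffic group to another device."""
    return up_members < minimum

assert ha_score(4, 4) == 10
assert ha_score(3, 4) == 7.5
assert not traffic_group_should_fail_over(3)
assert traffic_group_should_fail_over(2)
```

With both conditions configured, the minimum member count decides *whether* to fail over, and the HA score decides *where* the traffic group lands.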

hab6.jpg

 

We’ll then see a summary of the health conditions we just specified, including the minimum member count and sufficient member count. Then click Create HA Group.

hab7.jpg

 

Next, we go to Device Management > Traffic Groups and click on traffic-group-1.

hab8.jpg

 

Now, we’ll associate this new HA Group with traffic-group-1. Go to the HA Group setting and select the new HA Group from the drop-down list. Then set the Failover Method to ‘Device with the Best HA Score.’ Click Save.

hab81.jpg

 

Now we do the same thing for BIG-IP B. Again, go to System > High Availability > HA Group List and click the Create button. Give the group a descriptive name, click Add in the Pools area, and select the pool you’ve already created for BIG-IP B. For our situation, we’ll again require a minimum of 3 members to be up for traffic-group-1 to remain active on BIG-IP B. This minimum does not have to match the other HA Group, but it does in this example. Again, give all pool members a default weight of 10 in the HA Score. Click Add and then Create HA Group for BIG-IP B.

hab82.jpg

 

And then go to Device Management > Traffic Groups and click traffic-group-1. Choose BIG-IP B’s HA Group and select the same Failover Method as on BIG-IP A: Device with the Best HA Score. Click Save.

Lastly, you would create another HA Group on BIG-IP C as we’ve done on BIG-IP A and BIG-IP B. Once that’s done, you’ll have the same setup as this:

ha2a.jpg

As you can see, BIG-IP A has lost another pool member, causing traffic-group-1 to fail over. The BIG-IP software has chosen BIG-IP C as the next active device to host the traffic group because BIG-IP C has the highest HA Score based on the health of its pool.

Thanks to our TechPubs group for the basis of this article and check out a video demo here.

ps

High Availability Groups on BIG-IP

Posted in f5, big-ip, availability, adc, application delivery by psilva on April 18th, 2017

 

High Availability of applications is critical to an organization’s survival.

On BIG-IP, HA Groups are a feature that allows a BIG-IP to fail over automatically based not on the health of the BIG-IP system itself, but rather on the health of external resources within a traffic group. These external resources include the health and availability of pool members, trunk links, VIPRION cluster members, or a combination of all three. This is the only cause of failover that is triggered by resources outside of the BIG-IP.

An HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

In this scenario, there are three BIG-IP Devices – A, B, C and each device has two traffic groups on it.

ha1.jpg

 

 

As you can see, for BIG-IP A, traffic-group-1 is active; for BIG-IP B, traffic-group-2 is active; and on BIG-IP C, both traffic groups are in a standby state. Attached to traffic-group-1 on BIG-IP A is an HA group specifying that a minimum of 3 out of 4 pool members must be up for traffic-group-1 to remain active on BIG-IP A. Similarly, on BIG-IP B a minimum of 3 out of 4 pool members must be up for its traffic group to stay active on BIG-IP B.

On BIG-IP A, if fewer than 3 members of the pool attached to traffic-group-1 are up, the traffic group will fail over.

So let’s say that 2 pool members go down on BIG-IP A. Traffic-group-1 responds by failing over to the device with the healthiest pool, which in this case is BIG-IP C.

ha2.jpg

 

Now we see that traffic-group-1 is active on BIG-IP C.

Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this but essentially a few principles apply.

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.
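As a quick sanity check on the ‘Five Nines’ figure above, the arithmetic works out like this:

```python
# 99.999% availability allows only 0.001% downtime,
# which over a year is roughly five minutes.
minutes_per_year = 365.25 * 24 * 60            # 525,960 minutes
downtime_minutes = minutes_per_year * (1 - 0.99999)
assert 5.0 < downtime_minutes < 5.5            # about 5.26 minutes
```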

If the first two are in place, hopefully you never see a failure. But if you do, HA Groups can help.

ps

What is an Application Delivery Controller - Part II

Posted in f5, big-ip, availability, adc, load balance, application delivery, devcentral, infrastructure by psilva on January 26th, 2017

devcentral_basics_article_banner.png

Application Delivery Basics

One of the unfortunate effects of the continued evolution of the load balancer into today's application delivery controller (ADC) is that it is often too easy to forget the basic problem for which load balancers were originally created—producing highly available, scalable, and predictable application services. We get too lost in the realm of intelligent application routing, virtualized application services, and shared infrastructure deployments to remember that none of these things are possible without a firm basis in basic load balancing technology. So how important is load balancing, and how do its effects lead to streamlined application delivery?

Let’s examine the basic application delivery transaction. The ADC will typically sit in-line between the client and the hosts that provide the services the client wants to use. As with most things in application delivery, this is not a rule, but more of a best practice in a typical deployment. Let's also assume that the ADC is already configured with a virtual server that points to a cluster consisting of two service points. In this deployment scenario, it is common for the hosts to have a return route that points back to the load balancer so that return traffic will be processed through it on its way back to the client.

The basic application delivery transaction is as follows:

  1. The client attempts to connect with the service on the ADC.
  2. The ADC accepts the connection, and after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
  3. The host accepts the connection and responds back to the original source, the client, via its default route, the load balancer.
  4. The ADC intercepts the return packet from the host and now changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
  5. The client receives the return packet, believing that it came from the virtual server or host, and continues the process.

how_lb_works.png

 

Figure 1. A basic load balancing transaction.

This very simple example is relatively straightforward, but there are a couple of key elements to take note of. First, as far as the client knows, it sends packets to the virtual server and the virtual server responds: simple. Second, NAT takes place. This is where the ADC replaces the destination IP sent by the client (that of the virtual server) with the destination IP of the host to which it has chosen to load balance the request. Step three is the second half of this process (the part that makes the NAT "bi-directional"). The source IP of the return packet from the host will be the IP of the host; if this address were not changed and the packet were simply forwarded to the client, the client would receive a packet from someone it didn't request one from, and would simply drop it. Instead, the ADC, remembering the connection, rewrites the packet so that the source IP is that of the virtual server, thus solving this problem.
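The bidirectional NAT in steps 2 and 4 can be sketched roughly as follows. The `Packet` type and the addresses are hypothetical stand-ins; a real ADC does this rewriting in its data plane, not in Python.

```python
# Illustrative sketch of the bidirectional NAT described above.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str   # "ip:port"
    dst: str

VIRTUAL_SERVER = "203.0.113.10:80"   # example addresses (RFC 5737 ranges)
CHOSEN_HOST    = "10.0.0.2:8080"

def inbound(pkt: Packet) -> Packet:
    # Step 2: rewrite the destination to the selected host;
    # the client's source address is left untouched.
    return replace(pkt, dst=CHOSEN_HOST)

def outbound(pkt: Packet) -> Packet:
    # Step 4: rewrite the source back to the virtual server so the
    # client sees a reply from the address it actually contacted.
    return replace(pkt, src=VIRTUAL_SERVER)

client_pkt = Packet(src="198.51.100.7:51000", dst=VIRTUAL_SERVER)
to_host    = inbound(client_pkt)
reply      = outbound(Packet(src=CHOSEN_HOST, dst=client_pkt.src))

assert to_host.dst == CHOSEN_HOST and to_host.src == client_pkt.src
assert reply.src == VIRTUAL_SERVER   # client never sees the host's real IP
```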

The Application Delivery Decision

So, how does the ADC decide which host to send the connection to? And what happens if the selected host isn't working?

Let's discuss the second question first. What happens if the selected host isn't working? The simple answer is that it doesn't respond to the client request and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most ADC technology includes some level of health monitoring that determines whether a host is actually available before attempting to send connections to it.

There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and should be removed from the cluster of available services. Unfortunately, even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore most devices can do "service PINGs" of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the load balancer to differentiate between multiple services on a single host. The ADC understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered as valid destinations for user traffic.
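The layered checks described above might be modeled like this. The probe functions are stand-ins (assumptions for illustration); a real monitor issues actual ICMP, TCP, and application-level requests.

```python
# Sketch of layered health monitoring: a host PING, a TCP "service
# PING," and an application-level check, applied coarsest to finest.

def monitor(pool, host_up, tcp_ok, http_ok):
    """Return the (host, service) pairs that pass all checks."""
    available = []
    for host, service in pool:
        if not host_up(host):          # host down => all its services down
            continue
        if not tcp_ok(host, service):  # is the port accepting connections?
            continue
        if not http_ok(host, service): # does the application answer correctly?
            continue
        available.append((host, service))
    return available

pool = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.2", 443)]
# Simulated results: 10.0.0.1 is down entirely; 10.0.0.2's port-443
# service fails its app-level check while its port-80 service is fine.
up   = lambda h: h != "10.0.0.1"
tcp  = lambda h, s: True
http = lambda h, s: (h, s) != ("10.0.0.2", 443)

assert monitor(pool, up, tcp, http) == [("10.0.0.2", 80)]
```

Note how the finer-grained check lets the monitor keep 10.0.0.2's working service in rotation while removing only the failed one, exactly the per-service differentiation described above.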

This brings us back to the first question: How does the ADC decide which host to send a connection request to? Each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) which makes up the list of possibilities. Additionally, the health monitoring modifies that list to make a list of "currently available" hosts that provide the indicated service. It is this modified list from which the ADC chooses the host that will receive a new connection. Deciding the exact host depends on the ADC algorithm associated with that particular cluster. The most common is simple round-robin where the ADC simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current-connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services.
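Two of these selection algorithms can be sketched as follows, assuming a simple host list (illustrative only; production algorithms also factor in utilization and response times):

```python
# Round-robin and least-connections selection over the
# "currently available" list described above.
from itertools import cycle

class RoundRobin:
    def __init__(self, hosts):
        self._ring = cycle(hosts)   # walk the list, wrapping at the bottom

    def pick(self, available):
        # Skip hosts that health monitoring has removed from the list.
        while True:
            host = next(self._ring)
            if host in available:
                return host

def least_connections(available, conn_counts):
    # Pick the available host with the fewest active connections.
    return min(available, key=lambda h: conn_counts[h])

rr = RoundRobin(["a", "b", "c"])
assert [rr.pick({"a", "b", "c"}) for _ in range(4)] == ["a", "b", "c", "a"]
assert rr.pick({"c"}) == "c"   # unavailable hosts are skipped
assert least_connections({"a", "b"}, {"a": 12, "b": 3}) == "b"
```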

Sufficiently advanced application delivery systems will also be able to synthesize health monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A common example would be in e-commerce situations where a single host will provide both standard HTTP services (port 80) as well as HTTPS (SSL/TLS at port 443) and any other potential service ports that need to be allowed. In many of these circumstances, you don't want a user going to a host that has one service operational, but not the other. In other words, if the HTTPS services should fail on a host, you also want that host's HTTP service to be taken out of the cluster list of available services. This functionality is increasingly important as HTTP-like services become more differentiated with things like XML and scripting.
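One way to model service dependency, under the assumption that each host's services form a group that must all be healthy together (the data layout and names here are hypothetical):

```python
# Sketch of service dependency: if any service in a host's dependency
# group fails its health check, every service on that host is pulled
# from the available list.

def apply_dependency(health, groups):
    """health: {(host, port): up?}; groups: {host: ports that must all be up}."""
    result = {}
    for (host, port), ok in health.items():
        group_ok = all(health.get((host, p), False)
                       for p in groups.get(host, [port]))
        result[(host, port)] = ok and group_ok
    return result

health = {("web1", 80): True, ("web1", 443): False,
          ("web2", 80): True, ("web2", 443): True}
groups = {"web1": [80, 443], "web2": [80, 443]}

effective = apply_dependency(health, groups)
assert effective[("web1", 80)] is False   # pulled because HTTPS failed on web1
assert effective[("web2", 80)] is True
```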

To Load Balance or Not to Load Balance?

Load balancing in regards to picking an available service when a client initiates a transaction request is only half of the solution. Once the connection is established, the ADC must keep track of whether the following traffic from that user should be load balanced. There are generally two specific issues with handling follow-on traffic once it has been load balanced: connection maintenance and persistence.

Connection maintenance

If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the ADC must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance and requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most ADCs.
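A minimal sketch of such a connection table, showing both capabilities (illustrative only, not an actual ADC data structure):

```python
# Connection maintenance: map each open flow to the backend host it was
# load balanced to, and drop the entry when the connection closes.

class ConnectionTable:
    def __init__(self):
        self._table = {}   # (client_ip, client_port) -> backend host

    def lookup(self, flow):
        return self._table.get(flow)

    def add(self, flow, host):
        self._table[flow] = host

    def close(self, flow):
        # Called when the monitored connection closes, so the entry
        # doesn't pin future traffic to a host forever.
        self._table.pop(flow, None)

table = ConnectionTable()
flow = ("198.51.100.7", 51000)
table.add(flow, "10.0.0.2")
assert table.lookup(flow) == "10.0.0.2"   # mid-connection packets stay put
table.close(flow)
assert table.lookup(flow) is None         # the next connection is balanced fresh
```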

Persistence

Increasingly more common, however, is when the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task. In some cases, like standard web browsing, it doesn't matter and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, JavaScript, e-commerce "shopping cart," HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this depending on the protocol and the desired results. For example, in modern HTTP transactions, the server can specify a "keep-alive" connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web and mobile services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most ADCs provide other mechanisms for creating artificial server affinity.

One of the most basic forms of persistence is source-address affinity. Source address affinity persistence directs session requests to the same server based solely on the source IP address of a packet. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency, as it can be applied across all virtual servers and all services. In practice, however, the widespread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address, resulting in none of those users being load balanced after the first user's request, essentially nullifying the ADC capability. Today, the intelligence of ADCs allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as user name, to maintain persistence. However, organizations must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.
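Source-address affinity can be sketched in a few lines; the example also shows why a proxy hiding many users behind one IP defeats it (all names and addresses are illustrative):

```python
# Source-address affinity: the first request from an IP is load
# balanced; later requests from the same IP reuse the recorded host.
import itertools

def pick_host(src_ip, persistence, balance):
    if src_ip not in persistence:
        persistence[src_ip] = balance()   # only the first request is balanced
    return persistence[src_ip]

ring = itertools.cycle(["hostA", "hostB"])
persistence = {}
balance = lambda: next(ring)

assert pick_host("203.0.113.5", persistence, balance) == "hostA"
assert pick_host("203.0.113.5", persistence, balance) == "hostA"   # sticky
# A proxy collapsing many users behind one IP gets exactly one entry,
# so everyone behind it lands on the same host, never re-balanced:
assert pick_host("198.51.100.1", persistence, balance) == "hostB"
assert pick_host("198.51.100.1", persistence, balance) == "hostB"
```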

Final Thoughts

It is important to understand that basic load balancing technology, while still in use, is now only considered a feature of Application Delivery Controllers. ADCs evolved from the first load balancers through service virtualization, and today are available as software-only virtual editions as well. They can not only improve availability, but also affect the security and performance of the application services being requested.

Today, most organizations realize that simply being able to reach an application doesn't make it usable; and unusable applications mean wasted time and money for the enterprise deploying them. ADCs enable organizations to consolidate network-based services like SSL/TLS offload, caching, compression, rate-shaping, intrusion detection, application firewalls, and even remote access into a single strategic point that can be shared and reused across all application services and all hosts to create a virtualized Application Delivery Network. Basic load balancing is the foundation without which none of the enhanced functionality of today's ADCs would be possible.

And if you missed What is an ADC Part 1, you can find it here.

ps

Next Steps

Now that you’ve gotten this far, would you like to dig deeper or learn more about how application delivery works? Cool, then check out these resources:

 




Is 2015 Half Empty or Half Full?

Posted in security, f5, availability, cloud computing, silva, mobile, infrastructure, data loss, dns, iot, 2017 by psilva on July 15th, 2015

2015.jpg

With 2015 crossing the half way point, let's take a look at some technology trends thus far.

Breaches: Well, many databases are half empty due to the continued rash of intrusions, while the crooks are half full with our personal information. Data breaches are on a record pace this year: according to the Identity Theft Resource Center (ITRC), there had been 400 data incidents as of June 30, 2015, one more than at this time last year, and 117,576,693 records had been compromised. ITRC also noted an 85% increase in the number of breaches within the banking sector. From health care to government agencies to hotel chains to universities and even Major League Baseball, breaches and attacks are now a daily occurrence.

Cloud: Back in 2008, who would've thought this cloud thing would now be half full? Over the last couple of years, the 'cloud' has become a very viable option for organizations large and small. It is becoming the platform for IoT, and many organizations such as Google and GE are now moving critical corporate applications to the cloud. While hybrid is the new normal, remember: The Cloud is Still just a Datacenter Somewhere.

DNS: While IPv4 addresses are now completely empty, DNS seems to be half to almost full in 2015. DNS continues to be a target for attackers along with being an enabler for IoT. It is so important that Cisco recently acquired OpenDNS to help fight IoT attacks and the courts got a guilty plea from an Estonian man who altered DNS settings on infected PCs with the DNSChanger malware. I think of DNS as a silent sufferer - you really don't care about it until it doesn't work. Start caring this year.

Internet: Full but still growing. As noted above, IPv4 addresses are gone; Asia, Europe, Latin America and now North America have exhausted their supplies. If you're wondering how to handle this glass, F5 has some awesome 4to6 and 6to4 solutions.

IoT: Things, sensors and actuators are all the buzz and are certainly half full for 2015. At this time last year, IoT was at the top of the Gartner Hype Cycle and it has certainly not disappointed. Stories abound about Internet of Things Security Risks and Challenges, 10 of the biggest IoT data generators, the Top 10 Worst Wearable Tech Devices So Far, The (Far-Flung) Future Of Wearables, along with the ability to Smell Virtual Environments and if We Need Universal Robot Rights, Ethics And Legislation. RoboEthics, that is.

Mobile: We are mobile, our devices are mobile and the applications we access are now probably mobile also. Mobility, in all its connotations, is a huge concern for enterprises and it'll only get worse as we start wearing our connected clothing to the office. The Digital Dress Code has emerged. Mobile is certainly half full and there is no emptying it now.

Privacy: At this point with all the surveillance, data breaches, gadgets gathering our daily data and our constant need to tell the world what we're doing every second, this is probably bone dry. Pardon, half empty, sticking to the theme.

That's what I got so far and I'm sure 2015's second half will bring more amazement, questions and wonders. We'll do our year-in-reviews and predictions for 2016 as we all lament, where did 2015 go? There is that old notion that if you see a glass half full, you're an optimist, and if you see it half empty, you're a pessimist. Actually, you need to know what the glass was before the question: was it empty and filled halfway, or was it full and poured out? There's your answer!

ps

Related:

  • It's all contained within the blog.
Connect with Peter: Connect with F5:



The IoT Ready Platform

Posted in security, f5, big-ip, availability, silva, iot, things, sensors by psilva on June 16th, 2015

Over the last couple months, in between some video coverage for events, I've been writing a series of IoT stories. From the basic What are These "Things"? and IoT Influence on Society to the descriptive IoT Effect on Applications and the IoT Ready Infrastructure. I thought it only fair to share how F5 can play within an IoT infrastructure.

Because F5 application services share a common control plane—the F5 platform—we’ve simplified the process of deploying and optimizing IoT application delivery services. With the elastic power of Software Defined Application Services (SDAS), you can rapidly provision IoT application services across the data center and into cloud computing environments, reducing the time and costs associated with deploying new applications and architectures.

The beauty of SDAS is that it can provide the global services to direct the IoT devices to the most appropriate data center or hybrid cloud depending on the request, context, and application health. Customers, employees, and the IoT devices themselves receive the most secure and fastest experience possible.

F5's high-performance services fabric supports traditional and emerging underlay networks. It can be deployed atop traditional IP- and VLAN-based networks, works with SDN overlay networks using NVGRE or VXLAN (as well as a variety of less well-known overlay protocols) and integrates with SDN network fabrics such as those from Cisco/Insieme, Arista and BigSwitch, among others.

Hardware, Software or Cloud

The services fabric model enables consolidation of services onto a common platform that can be deployed on hardware, software or in the cloud. This reduces operational overhead by standardizing management as well as deployment processes to support continuous delivery efforts. By sharing service resources and leveraging fine-grained multi-tenancy, the cost of individual services is dramatically reduced, enabling all IoT applications - regardless of size - to take advantage of services that are beneficial to their security, reliability and performance.

The F5 platform:

  • Provides the network security to protect against inbound attacks
  • Offloads SSL to improve the performance of the application servers
  • Not only understands the application but also knows when it is having problems
  • Ensures not only the best end user experience but also quick and efficient data replication

F5 Cloud solutions can automate and orchestrate the deployment of IoT application delivery services across both traditional and cloud infrastructures while also managing the dynamic redirection of workloads to the most suitable location. These application delivery services ensure predictable IoT experiences, replicated security policy, and workload agility.

F5 BIG-IQ™ Cloud can federate management of F5 BIG-IP® solutions across both traditional and cloud infrastructures, helping organizations deploy and manage IoT delivery services in a fast, consistent, and repeatable manner, regardless of the underlying infrastructure. In addition, BIG-IQ Cloud integrates or interfaces with existing cloud orchestration engines such as VMware vCloud Director to streamline the overall process of deploying applications.

Extend, Scale - and Secure

F5 Cloud solutions offer a rapid Application Delivery Network provisioning solution, drastically reducing the lead times for expanding IoT delivery capabilities across data centers, be they private or public. As a result, organizations can efficiently:

  • Extend data centers to the cloud to support IoT deployments
  • Scale IoT applications beyond the data center when required
  • Secure and accelerate IoT connections to the cloud

For maintenance situations, organizations no longer need to manually redirect traffic by configuring applications. Instead, IoT applications are proactively redirected to an alternate data center prior to maintenance.

For continuous DDoS protection, F5 Silverline DDoS Protection is a service delivered via the F5 Silverline cloud-based platform that provides detection and mitigation to stop even the largest of volumetric DDoS attacks from reaching your IoT network.  

The BIG-IP platform is application and location agnostic, meaning the type of application or where the application lives really does not matter. As long as you tell the BIG-IP platform where to find the IoT application, the BIG-IP platform will deliver it.

Bringing it all together, F5 Synthesis enables cloud and application providers as well as mobile network operators the architectural framework necessary to ensure the performance, reliability and security of IoT applications.

Connected devices are here to stay—forcing us to move forward into this brave new world where almost everything generates data traffic. While there’s much to consider, proactively addressing these challenges and adopting new approaches for enabling an IoT-ready network will help organizations chart a clearer course toward success.

An IoT-ready environment enables IT to begin taking advantage of this societal shift without a wholesale rip-and-replace of existing technology. It also provides the breathing room IT needs to ensure that the coming rush of connected devices does not cripple the infrastructure. This process ensures benefits will be realized without compromising on the operational governance required to ensure availability and security of IoT network, data, and application resources. It also means IT can manage IoT services instead of boxes.

However an IoT-ready infrastructure is constructed, it is a transformational journey for both IT and the business. It is not something that should be taken lightly or without a long-term strategy in place. When done properly, an F5-powered IoT-ready infrastructure can bring significant benefits to an organization and its people.

ps




Application Availability Between Hybrid Data Centers

Reliable access to mission-critical applications is a key success factor for enterprises. For many organizations, moving applications from physical data centers to the cloud can increase resource capacity and ensure availability while reducing system management and IT infrastructure costs. Achieving this hybrid data center model the right way requires healthy resource pools and the means to distribute them. The F5 Application Availability Between Hybrid Data Centers solution provides core load-balancing, DNS and acceleration services that result in non-disruptive, seamless migration between private and public cloud environments.

Check out the new Reference Architecture today along with a new video!

ps




OK 2015, Now What?

Posted in security, f5, availability, silva, control, mobile, infrastructure, big-iq, things, sensors, 2017 by psilva on January 6th, 2015

Once again, after a couple weeks off and the calendar odometer flipping another year, I'm sitting here with a blinking cursor wondering what to write about. And the thing that pops into my head is Things. The Everythings. While 2014 was the hype year for the Internet of Things (IoT), according to many, 2015 will be the year that IoT (and really the Internet of Everything) becomes mainstream. It is occurring this week at CES, where tons of smart cars, smart kitchens, smart watches, smart televisions, smart wearables, smart appliances, smart healthcare devices, smart robots, smart belts and anything else that has a sensor, a chip and an internet connection will be on display. I wonder if terms like smart aleck and smarty pants might soon be in vogue.

While the Hover skateboard originally slated for 2015 is still in the works, there is a massive amount of info related to Things and how they are going to change society, change how we live and change us, as people.

Business Insider has a fascinating slide deck showing the most important ways the Internet of Everything market will develop, the benefits newly connected devices will offer consumers and businesses, and the potential barriers that could inhibit growth. IoT will be the largest device market, by far, and will soon be larger than the PC, tablet, and smartphone markets combined. The software to run IoT along with systems to make sense of all that data will be huge. Areas like enhanced customer service and improved use of field assets have already been realized by early adopters. Moving forward, new business models will blossom and services will become more important than simple products. How they all work together will be key.

IoT is not without its challenges. Threats to data security, physical security, the security of devices, regulations, privacy, encryption, authentication and a host of other issues all need to be addressed before this can really take off. Anyone remember the Cloud a couple years ago? Themes are the same. While consumer devices seem to be the focus today, businesses will benefit from greater operational efficiency along with help managing plants, property and equipment.

Trend Micro also has a good IoE 101 article with 5 easy steps to explain IoT and IoE to folks. Over on LinkedIn, Jeremy Geelan has put together a great, though not exhaustive, list of IoT events for 2015. He's revised it once already and just might again as more arrive. Over on Computer Business Review, they have their Top 6 Wearable Predictions for 2015, and Gartner is predicting that by 2017, 30% of wearables will be invisible to the human eye.

No matter what, all these things will need a robust, scalable and intelligent infrastructure to handle the massive traffic growth. If you thought our mobile phones & tablets generated a lot of traffic, our Things will generate many times what mobile contributed. Get ready now...

ps




Blog Roll 2014

Posted in security, f5, big-ip, availability, cloud computing, silva, blogging, cybercrime, family, sensors by psilva on December 16th, 2014

It’s that time of year when we gift and re-gift, just like this post from last year. And the perfect opportunity to re-post, re-purpose and re-use all my 2014 blog entries. If you missed any of the 96 attempts, including 57 videos, here they are wrapped in one simple entry. I read somewhere that lists in blogs are good. I broke it out by month to see what was happening at the time and, let's be honest, for pure self-promotion.

Thanks for reading and watching throughout 2014.

Have a Safe and Happy New Year.

 

January

February

March

April

May

June

July

August

September

October

November

December

And a couple special holiday themed entries from years past.

 

ps





Available Applications Anywhere

The path to successful application delivery has been a long and winding road for many companies.

Back in the days of Y2K and the dot-coms, applications were often delivered out of a physical data center. This usually consisted of a dedicated raised-floor room at the corporate headquarters or leased colocation space from one of the web hosting vendors—or both.

Soon, global organizations and ecommerce sites started to distribute their applications and deploy them at multiple physical data centers to address geo-location, redundancy, and disaster recovery challenges. This was an expensive endeavor even without the networking, bandwidth, and leased line costs.

Enter the cloud

When server virtualization emerged and organizations realized that they had the ability to divide resources for different applications, content delivery was no longer tethered 1:1 with a physical device. Content could live anywhere. With virtualization technology as the driving force, cloud computing formed and offered yet another avenue to deliver applications.

As cloud adoption grew, along with the software, platforms, and infrastructures enabling it, organizations were able to quickly, easily, and cost-effectively distribute their resources around the globe. This allowed organizations to place content closer to users depending on their location, and provided some fault tolerance in case of a data center outage. Cloud also offers organizations a way to manage services rather than boxes, along with just-in-time provisioning rather than costly over-provisioning. Cloud enables IT as a Service and the flexibility to scale when needed.

Today, there is a mixture of options available to deliver critical applications. Many organizations have private, owned, on-premises data center facilities. Others lease resources at a dedicated location.

Staying a step ahead

In order to achieve or even maintain continuous application availability and keep up with the pace of new application rollouts, many organizations are looking to expand their data center options, including cloud, to ensure application availability. This is important: according to IDC, 84 percent of data centers had issues with power, space, cooling capacity, assets, and uptime that negatively impacted business operations. That translates into application rollout delays, disrupted customer service, or unplanned expenses for emergency fixes.

Many organizations have found that operating multiple data centers is no easy task. New data center deployments or even the integration of existing data centers can cause havoc for visitors, employees, and IT staff alike. Public web properties, employee access to corporate resources, and communication tools such as email require security and back-end data replication for content consistency. On top of that, maintaining control over critical systems spread around the globe is always a challenge.

Simplify. Scale. Secure.

The BIG-IP platform provides organizations with global application services for DNS, federated identity, security, SSL off-load, optimization and application health and availability. Together, they create an intelligent, cost-effective, resilient global application delivery infrastructure across a hybrid mix of data centers. As companies simplify, secure, and consolidate across multiple data centers, they mitigate the impact to users or applications, minimize downtime, ensure continuous availability, and have on-demand scalability as needed.

ps

 




Highly Available Hybrid

Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this but essentially a few principles apply.

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.

If the first two are in place, hopefully you never see a failure but maintenance is a must.
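The detection-and-crossover logic in those three principles is essentially what an HA group (covered last week) automates: count the healthy pool members, and move the traffic group to the standby device when the count drops below a minimum. Here is a minimal sketch in Python; the member data, device names like `bigipA`, and the `select_active` helper are all illustrative, not a BIG-IP API:

```python
def check_health(member):
    # Placeholder probe; a real monitor would send an HTTP or TCP check.
    return member["up"]

def select_active(members, minimum_up, current, standby):
    """Stay on `current` while enough members are up; else cross over."""
    up_count = sum(1 for m in members if check_health(m))
    return current if up_count >= minimum_up else standby

pool = [{"name": "app1", "up": True},
        {"name": "app2", "up": True},
        {"name": "app3", "up": False},
        {"name": "app4", "up": True}]

# With 3 of 4 members up and a minimum of 3, the active device keeps
# the traffic group; raise the minimum to 4 and it fails over.
print(select_active(pool, 3, "bigipA", "bigipB"))  # bigipA
```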

BIG-IP high availability (HA) functionality, such as connection mirroring, configuration synchronization, and network failover, allows core system services to remain available in the event that a particular application instance becomes unavailable. Organizations can synchronize BIG-IP configurations across data centers to ensure the most up-to-date policy is enforced throughout the entire infrastructure. In addition, BIG-IP itself can be deployed as a redundant system in either active/standby or active/active mode.

soldiag.jpg

Web applications come in all shapes and sizes: static and dynamic, simple and complex, specific and general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute client requests to a pool of (multiple) application servers, which then query the database servers for the appropriate content. Each tier has redundant servers, so in the event of a server outage, the others take the load and the system stays available.
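Tier one of that design can be sketched in a few lines. This is a hedged illustration of the ADC's job, not F5 code: round-robin client requests across the redundant application-server pool, skipping any member a health monitor has marked down:

```python
from itertools import cycle

def make_balancer(pool):
    """Return a picker that round-robins across `pool`, skipping down members."""
    ring = cycle(pool)
    def next_server(health):
        # At most len(pool) attempts covers every member once.
        for _ in range(len(pool)):
            server = next(ring)
            if health.get(server, False):
                return server
        return None  # whole tier is down
    return next_server

servers = ["app1", "app2", "app3"]
pick = make_balancer(servers)
health = {"app1": True, "app2": False, "app3": True}

# app2 is down, so the rotation quietly skips it.
print([pick(health) for _ in range(4)])  # ['app1', 'app3', 'app1', 'app3']
```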

This is a tried and true design for most operations and provides resilient application availability within a typical data center. But fault tolerance between two data centers is even more reliable than multiple servers in a single location, simply because a lone data center is itself a single point of failure.

A hybrid data center approach allows organizations to not only distribute their applications when it makes sense but can also provide global fault tolerance to the system overall. Depending on how an organization’s disaster recovery infrastructure is designed, this can be an active site, a hot-standby, some leased hosting space, a cloud provider or some other contained compute location. As soon as that server, application, or even location starts to have trouble, organizations can seamlessly maneuver around the issue and continue to deliver their applications.
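That "maneuver around the issue" step can be sketched as a simple priority-ordered site selection, the way a GSLB answer might steer clients between locations. Everything here (the site names, addresses, and `resolve` helper) is a hypothetical illustration, not an F5 API:

```python
# Sites in priority order: primary data center first, then the
# disaster-recovery location (hot standby, leased space, cloud, etc.).
SITES = [("primary-dc", "203.0.113.10"),
         ("cloud-dr", "198.51.100.10")]

def resolve(site_health):
    """Return the address of the first healthy site, in priority order."""
    for name, address in SITES:
        if site_health.get(name, False):
            return address
    return None  # total outage: nothing left to answer with

print(resolve({"primary-dc": True, "cloud-dr": True}))   # 203.0.113.10
print(resolve({"primary-dc": False, "cloud-dr": True}))  # 198.51.100.10
```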

Driven by applications and workloads, a hybrid data center is really a technology strategy spanning the entire mix of on-premises and off-premises compute resources. IT workloads reside in conventional enterprise IT (legacy systems), in an on-premises private cloud (mission-critical apps), at a third-party off-premises location (managed, hosting, or cloud provider), or in a combination of all three.

The various combinations of hybrid data center types can be as diverse as the industries that use them. Enterprises probably already have some level of hybrid, even if it is a mix of owned space plus SaaS. Enterprises typically like to keep sensitive assets in house but have started to migrate workloads to hybrid data centers. Financial firms might have different requirements than retailers. Startups might begin entirely with a cloud-based service and then build their own facility if needed. Mobile app developers, particularly in gaming, often use the cloud for development and then bring the app in-house once it is released. Enterprises, on the other hand, have (historically) developed in house and then pushed out to a data center when ready. The variety of industries, situations, and challenges the hybrid approach can address is vast.

Manage services rather than boxes.

ps


 




