Archive for application delivery

Deploying F5’s Web Application Firewall in Microsoft Azure Security Center

Posted in security, f5, big-ip, cloud, cloud computing, silva, microsoft, application delivery, waf, azure by psilva on May 9th, 2017

Use F5’s Web Application Firewall (WAF) to protect web applications deployed in Microsoft Azure.

Applications living in the Cloud still need protection. Data breaches, compromised credentials, system vulnerabilities, DDoS attacks and shared resources can all pose a threat to your cloud infrastructure. The Verizon DBIR notes that web application attacks are the most likely vector for a data breach. While attacks on web applications account for only 8% of reported incidents, according to Verizon, they are responsible for over 40% of incidents that result in a data breach. A 2015 survey found that 15% of logins for business apps used by organizations had been breached by hackers.

One way to stay safe is using a Web Application Firewall (WAF) for your cloud deployments.

Let’s dig in on how to use F5’s WAF to protect web applications deployed in Microsoft Azure. This solution builds on BIG-IP Application Security Manager (ASM) and BIG-IP Local Traffic Manager (LTM) technologies as a preconfigured virtual service within the Azure Security Center.

Some requirements for this deployment are:

  • You have an existing web application deployed in Azure that you want to protect with BIG-IP ASM
  • You have an F5 license token for each instance of BIG-IP ASM you want to use

To get started, log in to your Azure dashboard. In the left pane, toward the bottom, you’ll see Security Center; click it.

awaf1.jpg

Next, you’ll want to click the Recommendations area within the Security Center Overview.

awaf2.jpg

And from the list of recommendations, click Add a web application firewall.

awaf3.jpg

A list of available web applications opens in a new pane. From the application list, select the application you want to secure.

awaf5.jpg

And from there click Create New. You’ll get a list of available vendors’ WAFs; choose F5 Networks.

awaf7.jpg

A new page with helpful links and information appears and at the bottom of the page, click Create.

awaf8.jpg

First, select the number of machines you want to deploy – in this case we’re deploying two machines for redundancy and high availability. Review the host entry and then type a unique password for that field. When you click Pricing Tier, you can get info about sizing and pricing. When you are satisfied, at the bottom of that pane click OK.

awaf82.jpg

Next, in the License token field, copy and paste your F5 license token. If you are only deploying one machine, you’ll only see one field. For the Security Blocking Level, you can choose Low, Medium or High. You can also click the icon for a brief description of each level. From the Application Type drop down, select the type of application you want to protect and click OK (at the bottom of that pane).

awaf83.jpg

Once you see two check marks, click the Create button.

awaf84.jpg

Azure then begins deploying the F5 WAF for your application. This process can take up to an hour. Click the little bell notification icon for the status of the deployment.

awaf8687.jpg

You’ll receive another notification when the deployment is complete.

awaf88.jpg

After the WAF is successfully deployed, you’ll want to test the new F5 WAF and finalize the setup in Azure, including changing the DNS records from the current server IP to the IP of the WAF.
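Before you finalize, a quick sanity check can confirm that the DNS record now resolves to the WAF’s public IP. Here’s a minimal sketch using Python’s standard library; the hostname and IP below are placeholders for your own application and WAF addresses.

```python
import socket

APP_HOSTNAME = "www.example-app.com"   # placeholder: your application's public hostname
WAF_PUBLIC_IP = "40.112.0.10"          # placeholder: the public IP assigned to the F5 WAF in Azure

# Resolve the hostname and collect all addresses it currently points to.
resolved = {addr[4][0] for addr in socket.getaddrinfo(APP_HOSTNAME, 443)}

if WAF_PUBLIC_IP in resolved:
    print(f"{APP_HOSTNAME} now points at the WAF ({WAF_PUBLIC_IP}) -- safe to finalize.")
else:
    print(f"{APP_HOSTNAME} still resolves to {resolved}; wait for DNS to propagate.")
```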

When ready, click Security Center again and then the Recommendations panel. This time we’ll click Finalize web application firewall setup.

awaf9.jpg

And click your Web application.

awaf91.jpg

Ensure your DNS settings are correct, check the I updated my DNS Settings box, and when ready, click Restrict Traffic at the bottom of the pane.

awaf92.jpg

Azure will give you a notification that it is finalizing the WAF configuration and settings, and you will get another notification when complete.

awaf93.jpg

And when it is complete, your application will be secured with F5’s Web Application Firewall.

Check out the demo video and rest easy, my friend.

ps

Related:




Configure HA Groups on BIG-IP

Posted in f5, big-ip, availability, load balance, application delivery, devcentral by psilva on April 25th, 2017

Last week we talked about how HA Groups work on BIG-IP and this week we’ll look at how to configure HA Groups in BIG-IP.

To recap, an HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

First, some prerequisites:

  • Basic Setup: Each BIG-IP (v13) is licensed, provisioned and configured to run BIG-IP LTM
  • HA Configuration: All BIG-IP devices are members of a sync-failover device group and synced
  • Each BIG-IP has a unique virtual server with a unique server pool assigned to it
  • All virtual addresses are associated with traffic-group-1

To the BIG-IP GUI!

First you go to System>High Availability>HA Group List and then click the Create button.

hab1.jpg

 

The first thing is to name the group. Give it a descriptive name that indicates the object type, the device it pertains to and the traffic group it pertains to. In this case, we’ll call it ‘ha_group_deviceA_tg1.’

hab2.jpg

 

Next, we’ll click Add in the Pools area under Health Conditions and add the pool we’ve already created for BIG-IP A to the HA Group. Then we move on to the minimum member count, which is the number of pool members that need to be up for traffic-group-1 to remain active on BIG-IP A. In this case, we want 3 out of 4 members to be up. If that number falls below 3, the BIG-IP will automatically fail the traffic group over to another device in the device group.

hab34.jpg

 

Next is the HA Score. The sufficient threshold is the number of up pool members you want to represent a full health score; in this case, we’ll choose 4. So if all 4 pool members are up, the pool is considered to have a full health score, and if fewer than 4 members are up, the health score is lower. We’ll give it a default weight of 10, which represents the full HA score for BIG-IP A. In other words, all 4 members need to be active in the group for the BIG-IP system to give BIG-IP A an HA score of 10. And we click Add.

hab6.jpg

 

We’ll then see a summary of the health conditions we just specified including the minimum member count and sufficient member count. Then click Create HA Group.

hab7.jpg
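To make the minimum member count and HA Score numbers concrete, here’s a tiny illustrative model of how the two settings interact. This is not the exact scoring algorithm BIG-IP uses internally; it’s just a sketch of the idea that the score scales with up pool members and the configured weight, while the minimum member count decides whether the traffic group can stay put.

```python
def ha_score(up_members: int, sufficient: int = 4, weight: int = 10) -> int:
    """Illustrative only: score scales with up members, capped at the sufficient threshold."""
    return weight * min(up_members, sufficient) // sufficient

def traffic_group_stays_active(up_members: int, minimum: int = 3) -> bool:
    """The traffic group remains active only while the minimum member count is met."""
    return up_members >= minimum

for up in range(4, 1, -1):
    print(f"{up} members up -> score {ha_score(up)}, stays active: {traffic_group_stays_active(up)}")
# 4 members up -> score 10, stays active: True
# 3 members up -> score 7, stays active: True
# 2 members up -> score 5, stays active: False (the traffic group fails over)
```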

 

Next, we go to Device Management>Traffic Groups and click on traffic-group-1.

hab8.jpg

 

Now, we’ll associate this new HA Group with traffic-group-1. Go to the HA Group setting and select the new HA Group from the drop-down list. Then set the Failover Method to Device with the Best HA Score. Click Save.

hab81.jpg

 

Now we do the same thing for BIG-IP B. Again, go to System>High Availability>HA Group List and then click the Create button. Give it a descriptive name, click Add in the Pools area and select the pool you’ve already created for BIG-IP B. For our situation, we’ll again specify that a minimum of 3 members need to be up for traffic-group-1 to remain active on BIG-IP B. This minimum does not have to be the same as in the other HA Group, but it is for this example. Again, give the HA Score a default weight of 10 for all pool members. Click Add and then Create HA Group for BIG-IP B.

hab82.jpg

 

Then go to Device Management>Traffic Groups and click traffic-group-1. Choose BIG-IP B’s HA Group and select the same Failover Method as on BIG-IP A, based on HA Score. Click Save.

Lastly, you would create another HA Group on BIG-IP C as we’ve done on BIG-IP A and BIG-IP B. Once that’s done, you’ll have the same setup as this:

ha2a.jpg

As you can see, BIG-IP A has lost another pool member, causing traffic-group-1 to fail over, and the BIG-IP software has chosen BIG-IP C as the next active device to host the traffic group because BIG-IP C has the highest HA Score based on the health of its pool.

Thanks to our TechPubs group for the basis of this article and check out a video demo here.

ps




Lightboard Lessons: The BIG-IP Profiles

Posted in Uncategorized, f5, big-ip, application delivery, lightboard, devcentral by psilva on April 19th, 2017

BIG-IP can manage application-specific network traffic in a variety of ways, depending on the protocols and services being used. On BIG-IP, Profiles are a set of tools that you can use to intelligently control the behavior of that traffic.

In this Lightboard Lesson, I light up the BIG-IP Profiles. What they are, what they do and why you should care.

 

ps

Related:




High Availability Groups on BIG-IP

Posted in f5, big-ip, availability, adc, application delivery by psilva on April 18th, 2017

 

High Availability of applications is critical to an organization’s survival.

On BIG-IP, the HA Groups feature allows BIG-IP to fail over automatically based not on the health of the BIG-IP system itself but rather on the health of external resources within a traffic group. These external resources include the health and availability of pool members, trunk links, VIPRION cluster members or a combination of all three. This is the only cause of failover that is triggered by resources outside of the BIG-IP.

An HA group is a configuration object you create and assign to a traffic group for devices in a device group. An HA group defines health criteria for a resource (such as an application server pool) that the traffic group uses. With an HA group, the BIG-IP system can decide whether to keep a traffic group active on its current device or fail over the traffic group to another device when resources such as pool members fall below a certain level.

In this scenario, there are three BIG-IP devices (A, B and C), and each device has two traffic groups on it.

ha1.jpg

 

 

As you can see, for BIG-IP A, traffic-group-1 is active. For BIG-IP B, traffic-group-2 is active, and for BIG-IP C, both traffic groups are in a standby state. Attached to traffic-group-1 on BIG-IP A is an HA group which specifies that a minimum of 3 pool members out of 4 need to be up for traffic-group-1 to remain active on BIG-IP A. Similarly, on BIG-IP B the traffic group needs a minimum of 3 out of 4 pool members up to stay active on BIG-IP B.

On BIG-IP A, if fewer than 3 members of traffic-group-1 are up, this traffic group will fail over.

So let’s say that 2 pool members go down on BIG-IP A. Traffic-group-1 responds by failing over to the device (BIG-IP) that has the healthiest pool…which in this case is BIG-IP C.

ha2.jpg

 

Now we see that traffic-group-1 is active on BIG-IP C.
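The failover decision itself boils down to comparing scores across devices and moving the traffic group to the device with the best one. The snippet below is a simplified illustration of that comparison, not the actual BIG-IP election logic; the device names and member counts are made up to mirror the diagram above.

```python
# Hypothetical pool health per device (up members out of 4), mirroring the scenario.
pool_health = {"BIG-IP A": 2, "BIG-IP B": 3, "BIG-IP C": 4}

def ha_score(up_members: int, sufficient: int = 4, weight: int = 10) -> int:
    # Simplified: full weight when the sufficient threshold is met, proportional below it.
    return weight * min(up_members, sufficient) // sufficient

# traffic-group-1 moves to whichever device currently has the best score.
best_device = max(pool_health, key=lambda device: ha_score(pool_health[device]))
print(f"traffic-group-1 fails over to {best_device}")   # -> BIG-IP C
```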

Achieving the ultimate ‘Five Nines’ of website availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this, but essentially a few principles apply.

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.

If the first two are in place, hopefully you never see a failure. But if you do, HA Groups can help.

ps

Related:

 

 

 




Deploy BIG-IP VE in Microsoft Azure Using an ARM Template

Posted in f5, big-ip, cloud, application delivery, devcentral, azure, access by psilva on April 12th, 2017

arm_logo1.jpg

Azure Resource Manager (ARM) templates allow you to repeatedly deploy applications with confidence. The resources are deployed in a consistent state and you can easily manage and visualize resources for your application.

ARM templates take the guesswork out of creating repeatable applications and environments. Deploy and deploy again, consistently.

Let’s walk through how to deploy a simple, single-NIC configuration of BIG-IP VE in Microsoft Azure using an ARM template.

First, go to the F5 Networks GitHub site where we keep our supported templates. There are other community-based templates at www.github.com/f5devcentral if needed, but for F5-supported templates, go to the F5 Networks site.

ave1.jpg

To view Azure templates, click f5-azure-arm-templates. In that folder you’ll see experimental and right under that is supported (the one you want).

ave2.jpg

Then click on the standalone folder and then the 1nic folder, which is the simplest deployment.

ave34.jpg

And as you scroll through and review the ‘Read Me,’ you’ll see the Deploy to Azure button under Installation. Select either Bring Your Own License (BYOL) or Pay As You Go (PAYG), depending on your situation.

ave5.jpg

This will launch the Azure Portal and the only thing you’ll really need is a license key if you chose BYOL. Then simply fill out the template.
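As an aside, if you would rather script this than click through the portal, the same template can be deployed with the Azure SDK for Python. This is a rough sketch assuming the azure-identity and azure-mgmt-resource packages; the template URI and the parameter names shown are illustrative placeholders, so check the template’s README for the actual parameters it expects.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"
RESOURCE_GROUP = "my-existing-rg"   # placeholder resource group
TEMPLATE_URI = "https://raw.githubusercontent.com/F5Networks/f5-azure-arm-templates/<path-to-1nic-template>"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Parameter names below are illustrative; use the names defined in the template's README.
deployment = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    "bigip-ve-1nic",
    {
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": TEMPLATE_URI},
            "parameters": {
                "adminUsername": {"value": "azureuser"},
                "adminPassword": {"value": "<a-strong-password>"},
                "dnsLabel": {"value": "mybigip"},
            },
        }
    },
).result()

print(deployment.properties.provisioning_state)   # "Succeeded" when the deployment completes
```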

In this case, we’re going to use an existing resource group that already contains an application.

ave7.jpg

Important: In the Settings section under Admin UN/PW, enter the credentials you want to use to log in to BIG-IP VE. The DNS Label (where you see REQUIRED) will be used to access your BIG-IP VE. For example, if you enter mybigip, the address will be something like ‘mybigip.westus.cloudapp.azure.com.’ Give the Instance Name something familiar so it’s easy to find.

ave9.jpg

There are different Azure Instance Types, which determine CPU and memory for your VM, and F5 licensing (Good/Better/Best), which determines the BIG-IP modules you can deploy. Then, if needed, enter your BYOL license key.

In addition, to be more secure, you should enter a range of IP addresses from your network in the Restricted Src Addresses field so access is locked to your address range. This setting determines who gets access to the BIG-IP instance in Azure, so you’ll want to lock it down.
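As a quick sanity check before you fill in Restricted Src Addresses, you can verify that the address you’ll be managing the BIG-IP from actually falls inside the CIDR range you plan to enter. A minimal sketch with Python’s standard library; the addresses are placeholders.

```python
import ipaddress

restricted_src = ipaddress.ip_network("203.0.113.0/24")   # placeholder: range you plan to enter
my_admin_ip = ipaddress.ip_address("203.0.113.42")        # placeholder: where you'll manage from

if my_admin_ip in restricted_src:
    print(f"{my_admin_ip} is inside {restricted_src}; you won't lock yourself out.")
else:
    print(f"{my_admin_ip} is NOT inside {restricted_src}; widen or change the range.")
```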

After entering the tag values, agree to the terms and conditions and click Purchase.

ave91.jpg

Next, you can monitor the deployment status. Keep hitting refresh and you’ll see resources being populated along with the blue ‘Deploying’ indicator at the top. When the Deploying bar disappears, you know you’re done.

ave93.jpg

Once complete, you get the notification that the BIG-IP VE was deployed successfully. Next, we’ll navigate to the resource group we selected at the top and then the security group for the BIG-IP.

ave96.jpg

You can see that within the security rules we’ve allowed ports 443 (HTTPS) and 22 (SSH). Port 22 allows access to the management port; this is how we’d connect to the BIG-IP to configure and administer it.

ave97.jpg

Going back to the resources, the BIG-IP VE itself is listed at the top.

ave98.jpg

When we click on the virtual BIG-IP we can get the IP address, and using a browser over port 443 we can connect to the config utility with either the DNS name or the IP address.

ave97.jpg

Here you would enter the admin credentials you specified in the template.

ave991.jpg

And that’s all there is to it. Now you can configure your virtual servers, pools, profiles and anything else you’d normally do on BIG-IP VE for your unique requirements. Thanks to Suzanne Selhorn for the basis of this article, and catch a video demo here.
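The same admin credentials also work against the iControl REST interface on port 443, which is handy if you want to verify the deployment or script the follow-on configuration. A minimal sketch using the requests library; the hostname below is the hypothetical DNS label from earlier, and the endpoint shown just reads the software version.

```python
import requests

BIGIP_HOST = "mybigip.westus.cloudapp.azure.com"   # the DNS label you chose in the template
USERNAME = "azureuser"                             # admin credentials from the template
PASSWORD = "<a-strong-password>"

# The VE ships with a self-signed certificate, hence verify=False for this quick check.
resp = requests.get(
    f"https://{BIGIP_HOST}/mgmt/tm/sys/version",
    auth=(USERNAME, PASSWORD),
    verify=False,
)
resp.raise_for_status()
print("BIG-IP is up; sys/version returned HTTP", resp.status_code)
```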

ps

Related:

 

 




Lightboard Lessons: Service Consolidation on BIG-IP

Posted in f5, adc, silva, application delivery, lightboard, devcentral, infrastructure, consolidate by psilva on March 29th, 2017

Consolidating point devices and services in your datacenter or cloud can help with the cost, complexity, efficiency, management, provisioning and troubleshooting of your infrastructure and systems.

In this Lightboard Lesson, I light up many of the services you can consolidate on BIG-IP.

ps

 




What is a Proxy?

Posted in Uncategorized, security, big-ip, silva, application delivery, devcentral, proxy by psilva on March 28th, 2017

devcentral_basics_article_banner.png

 

The term ‘proxy’ is a contraction of the Middle English word procuracy, a legal term meaning to act on behalf of another. You may have heard of a proxy vote, where you submit your choice and someone else casts the ballot on your behalf.

In networking and web traffic, a proxy is a device or server that acts on behalf of other devices. It sits between two entities and performs a service. Proxies are hardware or software solutions that sit between the client and the server and do something to requests and sometimes responses.

The first kind of proxy we’ll discuss is the half proxy. With a half-proxy, the client connects to the proxy and the proxy establishes the session with the servers. The proxy then responds back to the client with the information. After that initial connection is set up, the rest of the traffic will go right through the proxy to the back-end resources. The proxy may do things like L4 port switching, routing or NATing, but at this point it is not doing anything intelligent other than passing traffic.

halfvsfull.jpg
Basically, the half-proxy sets up a call and then the client and server do their thing. Half-proxies are also good for Direct Server Return (DSR). For things like streaming protocols, you’ll have the initial setup through the proxy, but instead of going through the proxy for the rest of the connection, the server will bypass the proxy and go straight to the client. This way you don’t waste resources on the proxy for something that can be done directly server to client.

A full proxy, on the other hand, handles all the traffic. A full proxy creates a client connection along with a separate server connection, with a little gap in the middle. The client connects to the proxy on one end and the proxy establishes a separate, independent connection to the server. This is true in both directions, and there is never any blending of connections from the client side to the server side; the connections are independent. This is what we mean when we say BIG-IP is a full proxy architecture.

The full proxy intelligence lives in that OSI gap. A half-proxy mostly just passes the client side traffic on the way in and does what it needs to; with a full proxy you can manipulate, inspect, drop or otherwise act on traffic on both sides and in both directions. Whether it’s a request or a response, you can work on the client side request, the server side request, the server side response or the client side response. You get a lot more power with a full proxy than you would with a half proxy.
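To make the ‘two independent connections’ idea concrete, here’s a bare-bones full proxy sketch in Python: it accepts a client connection, opens a completely separate connection to the back-end server, and relays bytes between the two. It has none of BIG-IP’s intelligence (no inspection, no policy) and the addresses are placeholders; it only illustrates the architecture.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)      # where clients connect (the proxy's client-side leg)
BACKEND_ADDR = ("10.0.0.10", 80)     # placeholder back-end server (the server-side leg)

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side stops sending."""
    while data := src.recv(4096):
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)   # signal EOF on the other leg, leave reads open
    except OSError:
        pass

def handle(client: socket.socket) -> None:
    # Independent server-side connection: its own socket, its own TCP state.
    server = socket.create_connection(BACKEND_ADDR)
    threading.Thread(target=relay, args=(client, server), daemon=True).start()
    threading.Thread(target=relay, args=(server, client), daemon=True).start()

with socket.create_server(LISTEN_ADDR) as listener:
    while True:
        conn, _ = listener.accept()
        handle(conn)
```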

reverseproxy_thumb.jpg

With BIG-IP (a full proxy) on the server side, it can be used as a reverse proxy. When clients make a request from the internet, they terminate on the reverse proxy sitting in front of the application servers. Reverse proxies are good for traditional load balancing, optimization, server side caching and security functionality. If you know certain clients or IP spaces are acceptable, you can whitelist them; likewise, you can blacklist known malicious sources or bad ranges/clients. You can do this at the IP layer (L4) or you can go up the stack to Layer 7 and control an HTTP/S request. You can also add a BIG-IP ASM policy: as it inspects the protocol traffic, if it sees an anomaly that is not native to the application, like a SQL injection, it can block it.

forwardproxy_2.jpg

On the client side, BIG-IP can also be a forward proxy. In this case, the client connects to the BIG-IP on an outbound request and the proxy acts on behalf of the client to the outside world. This is perfect for things like client side caching (grabbing a video and storing it locally), filtering (blocking certain time-wasting sites or malicious content), privacy (masking internal resources) and security.

You can also have a services layer, like an ICAP server, where you pass traffic to an inspection engine before it hits the internet. You can manipulate client side traffic out to the internet, manipulate server side traffic in from the internet, handle it locally on the platform or pass it off to a third-party services entity. A full proxy is your friend in an application delivery environment.

If you'd like to learn more about Proxies, check out the resources below including the Lightboard Lesson: What is a Proxy?

Related:

 




Protecting API Access with BIG-IP using OAuth

Posted in security, f5, big-ip, application delivery, devcentral, api by psilva on March 21st, 2017

As more organizations use APIs in their systems, they’ve become targets for the not-so-good-doers, so API security is something you need to take seriously. Most APIs today use the HTTP protocol, so organizations should protect them as they would ordinary web properties.

Starting in v13, BIG-IP APM is able to act as an OAuth Client, OAuth Resource Server and OAuth Authorization Server. In this example, we will show how to use BIG-IP APM to act as an OAuth Resource Server protecting the API.

In our environment, we’ve published an API (api.f5se.com) and we’re trying to get a list of departments in the HR database. The API is not natively protected, and we want to use APM to add OAuth protection to this API.

api1.jpg

 

First, let’s try an unauthenticated request.

api2.jpg

 

You can see we get the 401 Unauthorized response, which is coming from the BIG-IP. In this instance the response contains only 3 headers: Connection (close), Content-Length (0) and WWW-Authenticate (Bearer), indicating to the client that it wants Bearer token authentication.

api3.jpg
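You can reproduce the same challenge outside of Postman with a couple of lines of Python. This is a hedged sketch using the requests library against the example hostname from this walkthrough; the API path is a placeholder, and verify=False is only there because a lab BIG-IP typically presents a self-signed certificate.

```python
import requests

# Unauthenticated request to the protected API (path is a placeholder).
resp = requests.get("https://api.f5se.com/hr/departments", verify=False)

print(resp.status_code)                        # 401
print(resp.headers.get("WWW-Authenticate"))    # "Bearer" -- APM wants a bearer token
```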

 

So we’ll get that authorization and a new access token.

api4.jpg

 

Here, Postman (getpostman.com, a toolchain for API developers) is preconfigured to get a new access token from the OAuth authorization server, which is a BIG-IP.

api5.jpg

 

We will request the OAuth token and will need to authenticate to the BIG-IP.

api6.jpg

 

After that, we get an authorization request. In this case, BIG-IP is acting as the authorization server and is indicating to the resource owner (us) that there is an HR API application that wants to use certain information (about us) that the authorization server is going to provide.

api7.jpg

 

In this case, it is telling us that the resource server wants to get scope api-email.

We’ll click Authorize, and now we have our auth token, which is saved in the Postman client.

api8.jpg

 

Within the token properties we see that it expires in 300 seconds, that it is a Bearer token and that the scope is api-email, and we get a refresh token as well.

api9.jpg

 

Now we can add this token to the header and try to make a request again. And this time, we get a much better response.

api91.jpg

We get a 200 OK, along with the headers from the application server. When we click the Body tab within Postman, we see the XML that the API has returned in response to the query for the list of available departments. (There were no cookies returned by the server, if you were wondering about that tab.) In the Headers tab we see the Authorization header that was sent and the content of the Bearer token.
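What Postman is doing under the hood is simply adding an Authorization header carrying the bearer token. A minimal sketch of the same call in Python; the token value and API path are placeholders for the ones issued in your environment.

```python
import requests

ACCESS_TOKEN = "<access-token-from-the-authorization-server>"   # placeholder

resp = requests.get(
    "https://api.f5se.com/hr/departments",                      # placeholder path
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    verify=False,                                                # lab/self-signed certificate
)

print(resp.status_code)   # 200 once APM validates the token
print(resp.text)          # the XML list of departments returned by the API
```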

That’s a simple way to protect your APIs by leveraging the OAuth 2.0 Resource Server capabilities in BIG-IP.

Special thanks to Michael Koyfman for the basis of the content and check out his full demo here.

ps

Related:




Lightboard Lessons: What is a Proxy?

Posted in security, f5, big-ip, silva, application delivery, lightboard, devcentral, proxy by psilva on March 15th, 2017

The term ‘proxy’ is a contraction of the Middle English word procuracy, a legal term meaning to act on behalf of another.

In networking and web traffic, a proxy is a device or server that acts on behalf of other devices. It sits between two entities and performs a service. Proxies are hardware or software solutions that sit between the client and the server and do something to requests and sometimes responses.

In this Lightboard Lesson, I light up the various types of proxies.

 

 

 




Shared Authentication Domains on BIG-IP APM

Posted in security, f5, big-ip, application delivery, authentication, AAA, devcentral, access by psilva on February 14th, 2017

How to share an APM session across multiple access profiles.

A common question for someone new to BIG-IP Access Policy Manager (APM) is how to configure BIG-IP APM so the user only logs in once.

By default, BIG-IP APM requires authentication for each access profile.

domain_value.jpg

 

This can easily be changed by setting the domain cookie variable in the access profile’s SSO/Auth Domains settings.

Let’s walk through how to configure App1 and App2 to only require authentication once.

We’ll start with App1’s Access Profile.

dv1.jpg

 

Once you click through to App1’s settings, in the Top menu, select SSO/Auth Domains.

dv2.jpg

 

After setting the domain cookie there and browsing to App1, we’re prompted for authentication. We enter our credentials and, luckily, have a successful login.

dv4.jpg

 

Then we’ll try to log in to App2. When we click it, we’re not prompted again for authentication and we gain access without any prompts.

dv5.jpg

Granted, this was a single login request for two simple applications, but it can scale to hundreds of applications. If you‘d like to see a working demo of this, check it out here.
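If you want to verify the behavior outside of a browser, the sketch below uses a requests session to show the idea: once the APM session cookie is set for the shared parent domain after logging in to App1, the same cookie rides along to App2 and no second login is required. The hostnames are placeholders, the logon URL and field names are only what the default APM logon page commonly uses (adjust them to match your own access policy), and this is a rough illustration of the flow rather than a drop-in test.

```python
import requests

session = requests.Session()
session.verify = False   # lab/self-signed certificates

# Log in to App1; the default APM logon page typically posts to /my.policy
# (adjust the URL and field names to match your own access policy).
session.get("https://app1.example.com/", allow_redirects=True)
session.post(
    "https://app1.example.com/my.policy",
    data={"username": "user1", "password": "<password>"},
)

# Because the APM session cookie is scoped to the shared parent domain
# (example.com), it is sent automatically when we browse to App2.
resp = session.get("https://app2.example.com/")
print(resp.status_code)   # 200 without a second authentication prompt
```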

ps




