The inherent risk of a fixed focal point security posture


There are inherent, often overlooked limitations to relying upon traditional Security Information & Event Management systems (SIEMs) that every organization should be aware of. These limitations are: 1) a SIEM's fixed focal point, and 2) its dependence upon structured data sources.

Maintaining a fixed focal point (monitoring just a subset of data) only encourages nefarious opportunists to find vulnerabilities outside that narrow field of vision. Any experienced security professional will tell you that all data is security relevant, yet traditional SIEMs limit their field of vision to a fixed focal point of data. To understand why this matters, let's look at an example outside of information technology that's perhaps easier to follow. Imagine for a moment that there is a string of home break-ins happening in your neighborhood. To safeguard your property, you decide to take precautionary measures. You consult a security professional and they make several recommendations: place deadbolts on the front and back doors, reinforce the locks on the first-floor windows, and set cameras and alarm sensors above the front and back doors and windows. With all this complete, you rest easier, feeling far more secure. This is what a traditional SIEM does: it takes known vulnerability points and monitors them.

Building upon this example, let's imagine that a burglar comes along, intent on breaking into your home. She cases the house, spots the cameras, and decides that the windows and doors on the first floor pose too great a risk of detection. After studying the house for a while, she finds and exploits a blind spot in your defenses. Using a coat hanger, she gains access to the home in just six seconds through the garage without tripping any alarm. How can this be? Your security professional didn't view a closed garage door as one of your vulnerability points, so no cameras or security measures were installed there. As a result, your home has been breached and no alarms have been triggered, since the breach occurred outside your monitored field of vision. This scenario illustrates the inherent limitation of defining a problem based upon anticipated vulnerabilities: determined, inventive criminals will find ways in that your known defenses never considered. That is the inherent problem with traditional SIEMs; they are designed to look only at known threats and vulnerabilities, and as a result they do little to no good alerting you to unanticipated ones.

The dependence upon structured data sources creates another serious security limitation. Traditional SIEMs store information in a relational database. The limitation of this approach is that in order to get information from different sources into the database, users first need to define a structure for that information and then forcibly make the data adhere to it. Imposing this structure often means relevant security information is left out in the process.

To illustrate why this is an issue, let's imagine that detectives are trained to look only for fingerprints when analyzing a crime scene. Their investigations ignore any evidence that isn't a fingerprint: they search for full prints, partial prints, and, if they are really advanced, maybe hand and foot prints. In the course of their investigation, however, they never collect blood, hair, saliva, or other DNA-related evidence. Just how effective would such a detective be if the criminal wore gloves and shoes? I think everyone would agree: not very. Well, that's exactly what happens when you limit the types of data captured by force-fitting different data types into a standard database schema; running everything through a schema effectively strips out lots of relevant information that could be of great help in an investigation.
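
To make this concrete, here is a minimal Python sketch of the trade-off, using an invented web-access event and an invented four-field schema. The field names are purely illustrative and not drawn from any particular SIEM.

```python
# Hypothetical sketch: forcing a raw event into a fixed schema vs. keeping full fidelity.
# The schema and sample event are illustrative, not from any specific product.

FIXED_SCHEMA = ["timestamp", "src_ip", "user", "status"]  # what the database expects

raw_event = {
    "timestamp": "2013-11-04T09:12:33Z",
    "src_ip": "10.1.2.3",
    "user": "jsmith",
    "status": "200",
    "user_agent": "curl/7.29.0",   # dropped by the schema
    "bytes_out": "48211",          # dropped by the schema
    "tls_cipher": "RC4-SHA",       # dropped by the schema
}

# What a schema-bound store keeps:
structured_row = {k: raw_event.get(k) for k in FIXED_SCHEMA}
print("Stored row:", structured_row)

# What was silently discarded -- exactly the fields an investigator may need later:
discarded = {k: v for k, v in raw_event.items() if k not in FIXED_SCHEMA}
print("Lost fields:", discarded)
```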

Since all data is security relevant, to be truly effective security professionals must have the ability to collect information from all data sources in its full fidelity. Traditional SIEMs strip out that ability, so it follows that no business should rely solely upon a traditional SIEM for security. Make sense?

What is needed instead is a more fluid approach to security, one that captures information from multiple sources, evaluates all known exploits, and allows you to correlate disparate information to uncover new potential exploits before a reportable data breach occurs. Splunk's real-time machine data platform is extremely well suited to that task.

 

Permanent link to this article: http://demystifyit.com/the-inherent-risk-of-a-fixed-focal-point-security-posture/

What is the difference between Business Intelligence and Operational Intelligence?

[Image: Dashboard]

The differences between Operational Intelligence (OI) and Business Intelligence (BI) can be confusing. Just the name Business Intelligence sounds like Nirvana. Show of hands: who doesn't want their business to be intelligent? Still, the names are fairly ambiguous, so let's turn to Google's definitions to shed some light on their meaning:

Business intelligence, or BI, is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data. BI as a discipline is made up of several related activities, including data mining, online analytical processing, querying and reporting.

Operational intelligence (OI) is a category of real-time dynamic, business analytics that delivers visibility and insight into data, streaming events and business operations.

These definitions are helpful, but I think the picture above illustrates the differences quite clearly. Business Intelligence comes after the fact, which is illustrated by the rear-view mirror of a car. It's therefore helpful to think of BI as a reference to where you've been or what has happened in the past. Yes, you can store information in a data mart or data warehouse, and you can "mine that data," but that doesn't fundamentally change the fact that the information you are looking at or analyzing occurred sometime in the past.

Operational intelligence, on the other hand, is represented in the photograph by the front windshield of the car, depicting what's happening right now in real time. If you spot a large pothole in the distance, OI will alert you to that fact and enable you to make a course correction to avoid ruining your alignment, whereas BI will only let you know you drove through a pothole once your car is wobbling down the road from all the damage.
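
For readers who like code, here is a tiny, hypothetical Python illustration of the same contrast: a BI-style summary computed after the fact versus an OI-style check applied to each event as it arrives. The metric and threshold are made up for the example.

```python
# Illustrative sketch only: a BI-style batch summary vs. an OI-style streaming alert.
# The metric (response time in ms) and threshold are hypothetical.

from statistics import mean

history = [120, 135, 110, 2400, 125]          # yesterday's response times (ms)
print("BI view (after the fact): avg =", mean(history), "ms")   # the rear-view mirror

THRESHOLD_MS = 1000

def on_new_event(response_ms: int) -> None:
    """OI view: evaluate each event as it arrives and alert immediately."""
    if response_ms > THRESHOLD_MS:
        print(f"ALERT: slow response ({response_ms} ms) -- act now, not next week")

for measurement in [140, 130, 2400, 150]:      # simulated live stream
    on_new_event(measurement)
```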

Most businesses have the potential to leverage Operational Intelligence for competitive gain, but many are still stuck in the past with traditional BI tools. If you want to really crank up your business, I say it's time to get real-time and discover what the paradigm shift to OI can do for you.

Permanent link to this article: http://demystifyit.com/what-is-the-difference-between-business-intelligence-and-operational-intelligence/

What is machine data and how can you harness it?

Let me begin this post by describing what machine data is. Machine data is essentially log file information. When engineers build systems (hardware or software), they usually incorporate some element of log file capture into the design for several reasons: first for troubleshooting purposes, and second as a backup record in case something unintended happens with the primary system.

As a result, almost every electronic device and software program generates this "machine data." It's fairly safe to say that most things we interact with on a day-to-day basis capture it: our automobiles, cell phones, ATMs, EZ-Pass transponders, electric meters, laptops, TVs, online activity, servers, storage devices, pacemakers, elevators, and so on all generate and locally store machine data in one form or another. When we call the mechanic about a "check engine" warning light and they ask us to bring the car in so they can hook it up to the computer to diagnose the problem, we are leveraging machine data. What the mechanic is really doing is accessing the machine data stored on our automobile to identify error codes or anomalies that help pinpoint a mechanical problem. And the proverbial "black box" that is so crucial to explaining why an airplane crashed also relies on this machine data.
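
To give a feel for what this looks like in practice, here is a small, hypothetical Python sketch: a few invented lines of automotive-style machine data and a simple scan that pulls out the fault code, much as the mechanic's diagnostic computer would.

```python
# Hypothetical sketch: raw "machine data" is just text emitted by a device or program.
# The log lines and the OBD-style fault code below are invented for illustration.

import re

raw_log = """\
2013-11-04 08:15:02 ECU status=OK rpm=2100 temp=88C
2013-11-04 08:15:07 ECU fault code=P0420 temp=91C
2013-11-04 08:15:12 ECU status=OK rpm=2300 temp=90C
"""

# Pull out any fault codes, much as a diagnostic computer would.
for line in raw_log.splitlines():
    match = re.search(r"code=(\w+)", line)
    if match:
        print("Fault code found:", match.group(1), "at", line.split()[1])
```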

So, if machine data is everywhere, why haven't we heard much about it?

In a word, difficulty. Since machine data comes in lots of different shapes and sizes, collecting and analyzing it across lots of different sources is a difficult proposition. Going back to the car example, information collected from the different sensors is all fed into one collection point. The engineers building the automobile are able to dictate requirements to component manufacturers about the shape, format, frequency of collection, and so on of all this machine data. Since they design and build the entire process, they are able to correlate and present this information in a way that is useful to mechanics troubleshooting a car problem.

However, if we look at an enterprise IT infrastructure, this same collaboration and integration doesn't exist. A typical enterprise will have lots of unrelated components: load balancers, web servers, application servers, operating systems, PCs, storage devices, multiple sites (on premise and in the cloud), virtual environments, mobile devices, card readers, and more. Depending upon the size and scale of the business, there could be a very large number of machines generating this data. I personally work with customers whose server counts are measured in the tens of thousands.

Within the enterprise, no universal format for machine data exists. This creates an enormous challenge for any enterprise looking to unlock the value of machine data, and combined with the variety, volume, and variability of the data, it can be downright overwhelming. As a result, enterprises collect the information in silos and resort to an old-school, brute-force approach to analyzing it only when necessary. If a system or process is failing, a team is assembled from the various IT departments to engage in what can best be described as an IT scavenger hunt: manually poring over log files and tracing cause and effect across other log files throughout the network. The whole process is so labor-intensive and time-consuming that if the problem is only intermittent, a decision may be made to abandon identifying the root cause altogether.

Let's go back to the car example. Imagine that we bring our car to the mechanic, but instead of simply hooking a computer up to a command-and-control sensor, the mechanic had to connect to and analyze hundreds of different data points on the automobile, comparing all the available data in the hope of finding the problem. To build on this point further, suppose our automobile emits an annoying screech at 35 mph. We've had the car in the shop three times for the same problem and have spent hundreds of dollars, all to no avail. Eventually, we come to accept the screech as the new normal and turn the radio up when approaching 35 mph.

There has to be a better way!

Let's think about this for a minute: what would be needed to get the most value out of this machine data? Well, if we tried to structure the information by storing it in a database using a schema, we wouldn't be able to account for the variety of the data. Instead, we'll need a way to store information in an unstructured format. Next, we'll need a way to get all the different devices to send their data to our unstructured store in real time. Building connectors would be too expensive and difficult to maintain, so what we'll need is a way to simply forward this machine data, in any format, to our unstructured store. Then we'll need to be able to search the data, but how can we do that if it's totally unstructured? To do that, we'll need some way to index and catalog it all. Since the value of data rises exponentially in relation to corresponding information, we'll also need some way to correlate information across different data types, but how? So we start to think: what's common across all these different data types? Eureka! The date and time that something happened is present in all of this data. We'll also need a way to extract information from it, otherwise what's the point of doing all this in the first place? Hmm, since the data has no structure, creating reports with a traditional BI tool won't work; besides, reports are too rigid for the complex questions we will likely be asking of our data. Lastly, we'll need to address scale and performance. Whatever we design has to be able to bring in massive amounts of data in real time, because knowing what's happening right now across everything we run in our enterprise is far more interesting and valuable than what happened last week.
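
Here is a minimal Python sketch of the core idea just described: keep each event raw, extract only its timestamp, and merge events from different sources into one time-ordered view. The log formats and messages are invented for illustration.

```python
# Minimal sketch of the idea above: keep events raw, extract only a timestamp,
# then merge heterogeneous sources on time. All log formats here are invented.

from datetime import datetime

web_log = ["2013-11-04 09:12:33 GET /checkout 500"]
db_log  = ["2013-11-04 09:12:32 ERROR deadlock on table orders"]
lb_log  = ["2013-11-04 09:12:35 backend pool degraded"]

def to_events(lines, source):
    """Store each line untouched; index it only by its parsed timestamp."""
    events = []
    for line in lines:
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        events.append({"time": ts, "source": source, "raw": line})
    return events

# Merge everything into one time-ordered view -- the "correlate by time" step.
timeline = sorted(
    to_events(web_log, "web") + to_events(db_log, "db") + to_events(lb_log, "lb"),
    key=lambda e: e["time"],
)

for event in timeline:
    print(event["time"], event["source"], "|", event["raw"])
```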

Well, we can continue to ponder ways to solve all these technical challenges, or we can simply use Splunk, whose brilliant engineers seem to have totally nailed it.

Permanent link to this article: http://demystifyit.com/sources-machine-data/

End-to-End Operational Control & Application Management – Do you have it?


So much has been written about the US Government's Health Insurance Exchange that I'm almost afraid to mention it. For this post, I'm going to stay out of the political fray and avoid rendering any opinion about whether we should or should not have the Affordable Care Act, aka Obamacare. Instead, I would like to discuss the challenges of the Health Insurance Exchange strictly from an IT perspective.

The US Government has spent approximately $400 million and counting on the current system. So far, the system has been down more often than it has been operational. Secretary Kathleen Sebelius is on the defensive and has been called before Congress to testify about how the money was spent, what went wrong, and how she plans to fix it. On top of that, her boss, the President of the United States, has been forced to acknowledge the problems to the American public. You get the idea: the site is a train wreck. What we discovered is that the project was rushed, the supporting technology was dated, the systems were vastly more complex than originally thought, and nothing works as advertised.

Hypothetically speaking, how would you solve these technical problems if you were Sebelius? Bear in mind, you have to change the proverbial tires on the bus while it's driving down the road. Well, I've actually given this some thought. Throwing the whole thing out and starting from scratch isn't an option; it would take too long, and you have the President of the United States, Congress, and the US public breathing down your neck. About the only thing you could do in the short term is identify, isolate, and repair the glitches. The trouble is, a single transaction spans multiple systems and technologies. What's needed is the ability to trace a transaction end-to-end in order to ferret out and address the problems. Stabilize and fix what you can, and replace what you must. Once stabilized, you can test and upgrade fragile components. All this sounds great, but without end-to-end visibility and a single pane of glass to identify problems, you wouldn't know where to start.
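
As a rough illustration of what end-to-end tracing means, here is a hypothetical Python sketch that follows a single correlation ID across several systems' logs and shows where the transaction stalled. The systems, IDs, and messages are all made up.

```python
# Hedged sketch: following one transaction end-to-end by searching every system's
# logs for a shared correlation ID. The ID and log lines are purely illustrative.

logs = {
    "web_frontend": [
        "txn=ab12 enroll request received",
        "txn=zz99 enroll request received",
    ],
    "identity_service": ["txn=ab12 identity verified"],
    "plan_engine":      ["txn=ab12 ERROR timeout calling subsidy calculator"],
    "insurer_gateway":  [],   # the transaction never made it this far
}

def trace(txn_id: str) -> None:
    """Print every hop a transaction touched and where it stopped."""
    for system, lines in logs.items():
        hits = [line for line in lines if f"txn={txn_id}" in line]
        status = hits if hits else "-- no trace: transaction never reached this tier"
        print(f"{system:18} {status}")

trace("ab12")
```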

I'm quite proud of the fact that I work for a software company that has actually solved this problem. In fact, my employer, Splunk, offers the only machine data platform I'm aware of that can provide this level of visibility and insight across heterogeneous environments in real time. Simply Splunk it, find it, and fix it, and you'll quickly get a handle on what needs fixing and in what order.

Are you able to quickly identify and isolate technology problems across all your environments?

Permanent link to this article: http://demystifyit.com/end-end-operational-control/

An Early Warning System Saves: Anxiety, Jobs, & Your Business


Imagine this scenario: you just discovered that your IT systems have been hacked. Even worse, after poring over logs and conducting exhaustive analysis, you discover that the data breach has been going on for weeks, and in that time the perpetrator or perpetrators have been systematically siphoning sensitive data from your network. Now imagine that it's your job to report this data breach to the CEO. Would your heart skip a beat? I know mine would.

This awful scenario, while hypothetical, has no doubt played out in businesses and government agencies throughout the world. In addition to the lost trust, embarrassment, operational disruption, and financial impact on your brand, data privacy laws and regulations further raise the stakes through imposed fines. Moreover, Wall Street definitely doesn't take kindly to data breaches.

Here is a not-so-typical example of how a data breach can play out in the real world, but one you need to be aware of. This is an account of what happened to Heartland Payment Systems, a payment processing provider (NYSE: HPY). In 2008, Heartland was alerted by its partners Visa and MasterCard to suspicious account activity related to its customers. Heartland conducted an all-out investigation and was horrified to discover that a packet sniffer had been surreptitiously installed on its network.

At the time of this discovery, Heartland's stock was trading at $25 per share, with 36.83 million shares outstanding. As the full extent of the data breach came to light, Wall Street punished the stock; over the ensuing months it hit a low of just under $5 per share. That's more than an 80% drop in value, or approximately $736 million in market capitalization. In addition to the massive drop in market cap, Heartland was forced to pay substantial fines and spent vast sums on consultants, software, and hardware to harden its network. Overall, according to news articles, the data breach cost Heartland $140 million in fines and other hard-dollar costs.

What truly shocked everyone was the sheer magnitude of the breach: some 130 million credit card accounts were stolen. This single data breach could have put Heartland out of business. It has taken years, but thanks to some shrewd crisis management, Heartland has regained customer trust and, thankfully, the stock is doing better than ever, trading at $44.47 per share as of this writing.

Heartland's troubles could have been greatly minimized if it had an early warning system alerting it to network anomalies. Much like a smoke alarm alerts us to fire, monitoring inbound requests and outbound responses with Splunk would have alerted management and directed them to suspicious network activity that could quickly have been rooted out.
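
To show what such an early warning check might look like in its simplest form, here is a hypothetical Python sketch that compares each host's outbound volume against its own baseline and raises an alert on a large spike. The hosts, numbers, and threshold are invented; a real deployment would use far richer detection logic.

```python
# Hypothetical sketch of an "early warning" check: compare each host's outbound
# volume against its own baseline and alert on large deviations. Numbers invented;
# production systems would use richer features than a simple multiplier.

baseline_mb_per_hour = {"pos-terminal-01": 2.0, "db-server-07": 15.0}
current_hour_mb      = {"pos-terminal-01": 1.8, "db-server-07": 310.0}

SPIKE_FACTOR = 5  # alert if outbound traffic exceeds 5x the host's normal volume

for host, baseline in baseline_mb_per_hour.items():
    observed = current_hour_mb.get(host, 0.0)
    if observed > SPIKE_FACTOR * baseline:
        print(f"ALERT: {host} sent {observed} MB this hour "
              f"(baseline {baseline} MB) -- possible data exfiltration")
```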

To use an analogy: a tiny fire quickly extinguished causes little damage, but if you aren't aware of it, that same small fire can become a raging inferno and take down your business.

Public service announcement – Remember to check the batteries in your smoke detectors on your birthday.

Permanent link to this article: http://demystifyit.com/ignorance-truly-bliss/

Plug in to the power of Big Data through APIs

A customer of mine has been doing some stunning, leading-edge things, including: (1) leveraging their back-end SOA services, (2) exposing APIs to their partners, and (3) integrating seamlessly with various social media sites. These initiatives are delivering real-world value to their customers, and in the process they are capturing data, lots of data. When I asked what they intended to do with it, they replied, "Well, we aren't really sure just yet; we're thinking we'll just 'Big Data It Later.'"

While it sounds indifferent and haphazard, it's actually an intelligent strategy. Creating a "Big Data lake" to be dissected and analyzed later to find value and relevancy is a brilliant move, and it often requires the skills and in-depth analysis of an experienced data scientist or analyst. Turn these analysts loose and let them swim in the lake with snorkel and fins to find the sunken treasure of relevant insights. However, once those insights are discovered, then what? Sure, it's possible to slice and dice data through visualization tools to make more informed business decisions. But even greater value can be unlocked by binding those findings into data and system streams through APIs. By doing so, it becomes possible to analyze tons of data and act upon emerging trends well before they become apparent (if ever) to the average user. One example of this phenomenon: factoring in the temperature, time of day, clickstream analysis, client device (mobile, Mac/Windows), account balance, and social media references to present a customer with the best offer or up-sell opportunity. Think Amazon's recommendation engine on steroids!

Of course, exposing this extrapolated and synthesized data through APIs opens up a world of options regarding how you harness, expose, and leverage information. APIs open up mobile, along with the "Internet of Things," through expanded access (either as a contributing data source or via application access). More examples of integration points for competitive advantage include: rank-ordering outbound call lists for inside sales, incorporating relationships within marketing campaigns to boost effectiveness, improving policy risk analysis, improving portfolio investment and hedging strategies, and multi-dimensional demand planning for manufacturing, to name just a few. I'm sure that once you embark on the journey of asking "what if," you too will soon discover a treasure trove of possibilities for injecting insights gleaned from Big Data into your business through APIs.
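
As a rough sketch of what "binding insights into streams through APIs" can look like, here is a hypothetical example using Flask (assumed to be installed): a tiny endpoint that turns a few request attributes into an offer decision. The endpoint, fields, and scoring rule are invented stand-ins for whatever the data scientists actually discovered.

```python
# Illustrative sketch: binding an analytic insight into a live request path via an API.
# The endpoint, fields, and scoring rule are all hypothetical; Flask is used for brevity.

from flask import Flask, jsonify, request

app = Flask(__name__)

def best_offer(device: str, balance: float, hour: int) -> str:
    """Stand-in for a model derived from the 'data lake' analysis."""
    if device == "mobile" and hour >= 18 and balance > 1000:
        return "premium-upgrade"
    return "standard-promo"

@app.route("/recommendation")
def recommendation():
    offer = best_offer(
        device=request.args.get("device", "desktop"),
        balance=float(request.args.get("balance", 0)),
        hour=int(request.args.get("hour", 12)),
    )
    return jsonify({"offer": offer})

if __name__ == "__main__":
    app.run(port=5000)   # e.g. GET /recommendation?device=mobile&balance=2500&hour=20
```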

Permanent link to this article: http://demystifyit.com/making-big-data-count/

5 Things You Should Know About OAuth

What is OAuth?

OAuth is an emerging protocol for sharing information between applications without sharing passwords. Chances are good that you've already used OAuth without being aware of it (Palmolive: "Madge, you're soaking in it"). OAuth is favored by social media sites such as Facebook, Twitter, and LinkedIn, and by the broad ecosystem of applications that enhance those experiences. If you have ever allowed an application to access your Facebook data, OAuth is the protocol being leveraged behind the scenes to make that happen.

1. Should you even be concerned about OAuth?

The answer to this question is (drum roll, please): it depends. If your business wants to cash in on or interoperate with social media in a meaningful way, then you should definitely read on. If not, thanks for reading this far; take some time to read other articles on my site before departing.

2. How is OAuth different from Basic authentication?

In a word, passwords! Basic authentication requires applications to store and transmit usernames and passwords in order to work. For many use cases, this is just fine. However, if your application interacts with external web APIs, Basic authentication is not advised for two reasons: (1) managing usernames and passwords to access services is difficult and clumsy, and (2) it increases the potential for security breaches. OAuth, on the other hand, only requires that a user grant access rights to their data, without passing username and password information. This way, if you change your password, all your linked applications continue to work.
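
A quick, hypothetical Python sketch of the difference, using the requests library; the URL, credentials, and token are placeholders only.

```python
# Contrast sketch: Basic authentication ships the password with every call, while an
# OAuth-style call carries only a revocable access token. URL/credentials/token are
# placeholders, not a real service.

import requests

def call_with_basic_auth():
    # Basic auth: the third-party app must hold and transmit your real credentials.
    return requests.get("https://api.example.com/contacts",
                        auth=("jsmith", "MyRealPassword!"))

def call_with_oauth_token():
    # OAuth: the app holds only a token you granted; change your password and the
    # grant keeps working, revoke the token and the app is cut off.
    return requests.get("https://api.example.com/contacts",
                        headers={"Authorization": "Bearer <access-token-from-oauth-flow>"})
```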

3.  Is OAuth Safe?

It depends. OAuth can be just as safe as other authentication protocols, but you really need to know the spec, enforce and control access, and secure the communication channel. The best and most secure method for utilizing OAuth is to use an API Server.

4. What makes OAuth so unique?

In a nutshell, user-managed access. OAuth gives the application end-user the power to accept or reject authorizations to share information or integrate with third-party systems, without passing user passwords.

5.  Where can you learn more about OAuth?

I suggest that you start at the source: http://www.oauth.net. One caveat: with this specification, newer doesn't always mean better. Since the spec is constantly evolving, a new release could actually introduce unfavorable changes that you'll need to stay on top of. Another resource is Vordel, where we are helping many enterprise customers safely move into the API universe.

 

Permanent link to this article: http://demystifyit.com/5-things-you-should-know-about-oauth/

Can you trust you know who you are dealing with?

 

Like everything in life, knowing whom you are dealing with is essential. I seriously doubt I'm going out on a limb to say that no one likes dealing with a phony. If you think about it, in the physical world almost everything we do is based upon trust and relationships: friends, significant others, and professional relationships. Is your doctor qualified? Is your fiancée already married? Regardless of the relationship, it's important to know whom you are dealing with. The same is true in business: does this company have the means to pay for the product we are shipping them? Should we trust this importer? I believe that knowing the identity of whomever you are interacting with in the digital world is just as important. In fact, it may be even more important, given the potential for rapid, massive financial theft and sabotage.

To be clear, I'm defining a digital relationship as any electronic system that communicates information about you, your customers, your patients, or your partners, including the ability to change, share, or alter information on your behalf. Nowadays, with the advent of social media, these digital relationships are everywhere: Facebook, LinkedIn, Twitter, and other social applications. The trend is to give these applications permission to share information among themselves and with more traditional applications such as e-mail accounts, contact lists, and more. For example, grant LinkedIn access to your address book or e-mail account and it will search for new business contacts to link to. The power of what I'll refer to as "cross-communication applications" is unmistakable; they save time and provide tremendous benefit to the end user.

Even businesses are getting into the act, actively sharing enterprise information from cloud-based applications such as Salesforce and Concur with on-premise back-office applications such as E-Business Suite, SAP, or other home-grown systems. With all this sharing going on, it's vitally important that everyone is certain of the identities exchanging information. If a malevolent person or program were to successfully impersonate your digital identity, the resulting damage could be quite significant. Therefore, knowing that you are sharing information only with a trusted identity is critical.

Consumer-based cross-communication likely poses less of a financial threat than enterprise information sharing does, but ultimately only you can be the judge of that. The more important the information is to you, the more security measures you should take; defense in depth is truly your single best defense against malicious threats.

A Gateway is one of the most powerful tools available to stop would-be posers from accessing your digital assets. Since a Gateway reads and monitors all application traffic flowing into and out of cross-communication applications, you can instruct it to do a number of things, such as the following (a rough sketch of the first check appears after the list):

* Verifying that the incoming IP address matches a "white list" of trusted IP addresses
* Verifying that the IP traffic hasn't been spoofed
* Ensuring that incoming traffic does not contain Trojans or known cyber attacks, and much more.
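
As a rough sketch of the first check above, here is a hypothetical Python example that rejects traffic whose source address falls outside a trusted whitelist. The addresses are invented, and a real Gateway performs this (plus spoofing and payload checks) inside its policy engine rather than in application code.

```python
# Hypothetical whitelist check: reject traffic whose source IP is not on a trusted list.
# Addresses are invented for illustration.

from ipaddress import ip_address, ip_network

TRUSTED_NETWORKS = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def allow(source_ip: str) -> bool:
    """Return True only if the caller's address falls inside a trusted network."""
    addr = ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)

for caller in ["203.0.113.17", "192.0.2.99"]:
    print(caller, "->", "pass" if allow(caller) else "blocked")
```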

In summary, if securing your information matters, you should do a little research to determine whether a Gateway is right for you. Remember, we all really need to know whom we are dealing with.

Permanent link to this article: http://demystifyit.com/can-you-trust-you-know-whom-you-are-dealing-with/

What is a Gateway, and what can it do for you?

Since I sell Gateways every day, I thought I would tackle this question. I'll begin by stating that a Gateway is perhaps one of the most misunderstood, yet most powerful, technology components available in a technologist's arsenal. While you may think of a Gateway only as an instrument to secure web services, it is considerably more versatile and is extremely adept at solving a broad array of complex technical challenges. To use an analogy, don't just think of a car; think Chitty Chitty Bang Bang.

To understand what makes a Gateway such a powerful tool, and to shed some light on how it works, it's helpful to first know where it is most often installed within your technical architecture. A Gateway is typically installed at the edge of your network, inside what techies call the DMZ (an acronym for Demilitarized Zone). Techies use that term because it's the front-line defense for all Internet traffic flowing into and out of your enterprise. As such, the Gateway operates as a high-performance input/output device that can apply operations, in near real time, to traffic passing into and out of your network. One of the most important functions of a Gateway is its ability to stop bad traffic (schema bombs, SQL injections, etc.) from ever making its way into your enterprise.

Since application traffic flows through the Gateway on its way to your enterprise systems, the Gateway gets first crack at doing something meaningful with this information. You can define a whole series of operations, or policies, that the Gateway applies to this traffic. How you define your policies and their conditional instructions can completely alter your perception of just what a Gateway is and does. To extend the analogy, if you only saw the car from Chitty Chitty Bang Bang flying, you would think it was an airplane; if you saw it racing across the water, you would think boat; and if you only saw it on the road, you would think of it as just an automobile. Combine all three perspectives and you would likely scratch your head and say to yourself, "That's one hell of a machine, whatever it is." Well, unlike the movies, a Gateway is the very real equivalent of the Chitty Chitty Bang Bang automobile, only it can perform ten or more tasks extremely well instead of just three.

The operations a Gateway can perform are things such as:

 

  • Inspect
  • Verify
  • Transform
  • Redact
  • Enrich
  • Encrypt
  • Block
  • Route
  • Throttle
  • Analyze
  • Log
  • Report

 

Each operation, performed standalone or combined with other operations, can quite literally change your perception of the technology. A Gateway can:

  • Prevent unauthorized application access into your network
  • Thwart Denial of Service Attacks
  • Integrate On-Premise with cloud based applications across your entire enterprise
  • Operate as a Cloud Service Broker
  • Serve as a unified policy enforcement point – enforce IdM entitlements
  • Provide federated access
  • Re-purpose web services by redacting responses
  • Provide real-time insight as to how all your composite applications are performing
  • Transform application data from one language to another, and back again (SOAP to REST) – Go Mobile Quickly, without added time or expense
  • Throttle certain network traffic to meet SLA requirements
  • Serve as a simple Enterprise Service Bus (ESB) or front-end an existing ESB to improve its performance by as much as 8X
  • Send alerts to management, and much more…
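
To make the idea of chaining these operations concrete, here is a toy Python sketch in which a message passes through a short policy chain (inspect, redact, route) before reaching a backend. The policies, fields, and rules are invented for illustration and are not how any particular Gateway product implements them.

```python
# Toy sketch of a policy chain: each operation inspects or rewrites the message,
# and the "gateway" applies them in order before anything reaches the backend.
# The policies and message fields are invented.

def inspect(msg):
    # Crude stand-in for the threat-inspection step.
    if "DROP TABLE" in msg["body"].upper():
        raise ValueError("blocked: suspected SQL injection")
    return msg

def redact(msg):
    # Mask the sensitive value before it travels any further.
    if msg.get("ssn"):
        msg["body"] = msg["body"].replace(msg["ssn"], "***-**-****")
    return msg

def route(msg):
    # Pick a backend pool based on the calling client.
    msg["backend"] = "mobile-pool" if msg["client"] == "mobile" else "web-pool"
    return msg

POLICY_CHAIN = [inspect, redact, route]

message = {"client": "mobile", "ssn": "123-45-6789",
           "body": "update address for 123-45-6789"}

for policy in POLICY_CHAIN:
    message = policy(message)

print(message)   # body redacted, backend chosen, nothing malicious let through
```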

In short, a Gateway is a very powerful tool that can solve numerous complex technical challenges and should be a core component of your infrastructure. And of course, the most powerful, flexible, and easy-to-use Gateway on the market is, hands down, Vordel.

Permanent link to this article: http://demystifyit.com/what-is-a-gateway-and-what-can-it-do-for-you/

Taming the SharePoint Beast

SharePoint is one of the most pervasively used technologies to come along since Microsoft Office. Once installed, SharePoint tends to spread like a weed, often popping up in uncontrolled ways throughout the enterprise. Users simply love the freedom and autonomy of the tool, which is why CIOs and CSOs pull their hair out over the difficulty of managing and securing the information contained within it.

For the record, I'm a big fan of autonomy, but then again, who isn't? The trouble, or threat, really starts when someone stores sensitive information within SharePoint, which, let's face it, is going to happen often. That's where the fun and games stop and the need for enterprise-class security begins. After all, proprietary information such as R&D, financial information, strategy documents, market analysis, and engineering blueprints needs to be safeguarded and should adhere to the same security controls applied to other corporate information systems. So, if your company has standardized on Oracle Access Manager, CA SiteMinder, RSA Access Manager, IBM Tivoli Access Manager, or the like, then you will most definitely want to leverage those systems with SharePoint. The trouble is, Microsoft's approach to IdM is akin to Ford's approach to Model T colors (you can have any color you want, as long as it's black): you can use SharePoint with IdM as long as you use Microsoft's identity management products. Given the limited capabilities of Microsoft's identity management offering, this is neither a practical nor a viable solution. So what should you do?

Fortunately, there is a seamless and elegant way to quickly and easily leverage your existing IdM infrastructure with SharePoint. By introducing a Gateway into your infrastructure, you can close the door to potential threats while leveraging what you already own. And there is more good news. First, you don't have to install software everywhere. Second, you will gain insights about SharePoint (uptime, latency, and performance) that you can't possibly have today. And third, SharePoint application performance will greatly improve.

If you've been struggling with this problem for a while, I bet everything I just stated sounds like magic. But once you understand the mechanics of what the Gateway is actually doing, the behind-the-scenes operations that make it all happen, it will make perfect sense. A word of caution, though: not all Gateways are created equal, and a number of Gateway vendors' engines simply aren't equipped to tackle this problem as efficiently as Vordel's. I'm not saying it can't be done, but to use an analogy, the difference in the level of effort required is like the difference between planting a flower and planting a fully grown 50-foot tree. Both CAN be done, but the tree will take much longer and demand far more resources.

I hope you enjoyed this article, and look forward to your feedback.

Permanent link to this article: http://demystifyit.com/taming-the-sharepoint-beast/
