One of the big unreasonable fears expressed by Information Technology (IT) professionals in the early days of cloud computing was the question of how they could possibly manage servers that were located in some distant data center.
Unreasonable because, with a secure internet connection, it hardly mattered where the server was located. They would connect the same console software to the server and do the same things they would do were the server right there in the room with them.
Shared Pool of Resources
Over time those technologists became comfortable with the idea of dynamically sharing a pool of server, storage, and other resources housed in a data center somewhere beyond their own four walls.
The concept of sharing a pool of resources that can be quickly requested and just as quickly released is a core component of the definition of cloud computing. Memory, processor power and more can be accessed by any user via a self-service portal, and returned to the pool when no longer needed. Sharing pooled resources instead of over-provisioning dedicated resources to each user creates tremendous economies that form the foundation of the cloud value proposition.
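To see why pooling creates those economies, here is a minimal, purely illustrative Python simulation (the user count, demand figures, and activity rate are invented for the example): dedicated provisioning must reserve every user's worst-case demand, while a shared pool only needs to cover the highest aggregate demand actually observed, because users rarely peak at the same time.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

USERS = 100
PEAK_GB = 8      # worst-case memory any single user might need
STEPS = 1000     # simulated time steps

# Dedicated model: every user gets a reserved, peak-sized slice.
dedicated_capacity = USERS * PEAK_GB

# Pooled model: size the shared pool for the highest *aggregate*
# demand observed, since only some users are active at any moment.
peak_aggregate = 0.0
for _ in range(STEPS):
    demand = sum(random.uniform(0.5, PEAK_GB) * (random.random() < 0.3)
                 for _ in range(USERS))
    peak_aggregate = max(peak_aggregate, demand)

print(f"Dedicated provisioning: {dedicated_capacity} GB")
print(f"Shared pool:            {peak_aggregate:.0f} GB")
```

Under these assumed numbers the shared pool comes out to a fraction of the dedicated total, which is the statistical-multiplexing effect behind the cloud value proposition.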
Sharing Support as a Pooled Resource
When you look at IT support, there are really two separate strategies required.
Core – The first is the network “core” support strategy. How to maintain the servers, the storage, the routers and switches, and the rest of the central infrastructure that runs the network. With cloud computing, responsibility for core support transfers to the cloud service provider. The cost of this support is folded into the fee you pay for the service.
Edge – The second strategy is somewhat trickier because it involves what those IT professionals like to refer to as the most difficult part of a network to manage, the segment between the keyboard and the back of the chair, the user.
Users require support whenever something doesn’t perform as expected. Whether due to a malfunction, or an incorrect expectation, the user experiences a lack of certainty as to how to proceed. The prudent next step is to request support.
This used to be one of those areas in which larger corporations had a substantial advantage because they could justify the expense of staffing their own Help Desk to provide needed user support.
However, many midmarket and smaller companies have achieved the same results by simply sharing from a pool of support resources: a Virtual IT Department!
A well-designed Virtual IT Department achieves maximum economies by layering multiple strategies into place to provide lowest-cost support wherever possible.
- They examine which questions are asked most often and provide answers to these on a Self-Support Portal where users can access the answers instantly without waiting for a person to respond.
- If the user’s question cannot be answered by the self-support portal it is routed immediately to one of a team of support generalists who can either answer it or route it to the appropriate specialist for reply.
- If the issue is being caused by a mis-configuration or other technical flaw, the specialists can reach in with online tools to resolve it remotely.
- If similar issues are coming in from multiple users, the support software can correlate all the requests to help with root-cause determination.
- If the root-cause is a physical problem with a piece of equipment or connecting cables, a field technician can be dispatched to the site where the equipment resides so they can correct it swiftly. In the meantime, the support team can be notifying all users of temporary workarounds as necessary.
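The layered strategy above can be sketched as a toy routing function. Everything here (the FAQ entries, team labels, and categories) is hypothetical; a real help desk would drive this logic from its ticketing platform:

```python
# Toy model of layered support routing: self-service first,
# then a generalist, then the appropriate specialist.
FAQ = {
    "reset password": "Use the self-service reset link on the portal.",
    "vpn setup": "Download the VPN profile from the portal downloads page.",
}

SPECIALTIES = {"email": "messaging team", "network": "network team"}

def route_ticket(question, category=None):
    """Return (tier, answer_or_owner) for a support request."""
    key = question.lower().strip()
    # Tier 0: the self-support portal answers the most common questions.
    if key in FAQ:
        return ("self-service", FAQ[key])
    # Tier 1: a generalist handles anything without a known category.
    if category not in SPECIALTIES:
        return ("generalist", "support generalist")
    # Tier 2: known categories route straight to the right specialist.
    return ("specialist", SPECIALTIES[category])

print(route_ticket("reset password"))
print(route_ticket("mail stuck in outbox", category="email"))
```

The cost logic is in the ordering: each tier is cheaper per incident than the one below it, so requests only escalate when the cheaper tier can't resolve them.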
The Flexible Support Solution
As with all virtual solutions, a new degree of flexibility is introduced that can significantly improve the speed and quality of support service delivery. Customized support can be added for line-of-business applications specific to a given customer set simply by training specialists on those platforms. Alerts, advisories, notifications and other communications are also made far easier by direct access to the network.
The key to establishing or accessing a successful Virtual IT Department is in the development of an appropriate and effective strategy. Talk to your CloudStrategies Advisor about your virtual support options!
Trust. It’s perhaps the main element in any decision you make regarding computer & communication services for your company and yourself. You need to feel you can trust your provider to keep your data secure, your personal information private, and your communications protected from eavesdroppers.
Millions of people trust services like Microsoft Office 365 with their most important communications, including email using Exchange Online and instant messaging, voice and video over Skype and Skype for Business (formerly Lync). While it is likely that they implicitly trust these services because they are provided by Microsoft, the world’s largest software company, you should stop to ask what Microsoft is actually doing to earn this trust. Yes, they have vast resources, but what are they doing with them?
A post on the Office Blogs from the Office 365 Team answers this question very thoroughly. “From Inside the Cloud: What does Microsoft do to prepare for emerging security threats to Office 365?” introduces us to Chang Kawaguchi, a group engineering manager for security for Office 365; Travis Rhodes, lead security software engineer for Office 365; and Vijay Kumar, a senior product manager for Office 365. These are three of the people who spearhead Microsoft’s strategy for keeping Office 365 and Microsoft Azure cloud services secure and trustworthy.
The post features an excellent short video that describes several of the security strategies employed by the group, beginning with one that would seem to just be common sense: Assume people are trying to break into your network and data at all times. Constant vigilance. Oddly, most people seem to assume that nobody would ever bother attacking them. Microsoft invests heavily in an “Assume Breach” approach which causes them to constantly be on the lookout for new threats.
Assuring viewers that no customer data is ever threatened or even touched in their work, the video describes the work of the “Red” and “Blue” teams constantly “at war” with each other to battle-test the armor that protects these systems.
The Red Team, “an internal dedicated team of ‘white hat’ hackers from varied industry backgrounds such as broader technology industry, defense and government,” constantly conducts penetration testing on Microsoft’s systems. Counterbalancing them is the Blue Team, “whose role it is to monitor activities within the system to detect anomalous behavior and take action. As hard as the Red team is trying to find and exploit vulnerabilities the Blue team is trying to detect, investigate and mitigate security events.”
As the post concludes, “The combined efforts of our teams go toward improving detection by evolving our machine learning algorithms for the detection of anomalous activity as well as incident response.”
Any IT manager responsible for system security will find valuable insight in this post and the included video. Those wishing to continue to learn more should regularly visit the Red team blog. If you have any questions about anything you read, please reach out to your CloudStrategies Advisor for more information!
Extended support for Windows Server 2003 will be withdrawn on July 14, 2015. After that date there will be no more patches, updates, or security fixes for that old version. If you’re still running Windows Server 2003, it is now critical to start planning to move off of it and onto a more modern platform. With most upgrades there is usually a single path, only one way to go. This time, for the first time, you have some choices available to you!
Even if You’re NOT Running Windows Server 2003
Whether you use Windows Server 2003, 2008, or any of the other versions released over the past decade, now may be the time to make a change, especially if you want to save money, reduce support costs, and eliminate headaches.
One of your choices is, of course, the latest version, Windows Server 2012 R2. With over 300 features that didn’t even exist back in 2003, this is a great choice for those who wish to keep running and maintaining their own servers on their own premises. New Microsoft CEO Satya Nadella refers to Windows Server 2012 R2 as the “Cloud OS” because you can use it to enable and enjoy all the advantages of private cloud computing.
For this migration, for the first time, you have flexible choices regarding how your future state environment will look and function. While many think it’s a matter of choosing either on-premise Windows Server 2012 R2 or Microsoft Azure cloud services, it’s really more of a matter of how you take advantage of the hybrid cloud opportunity to combine both.
Transforming Your Data Center
According to Microsoft’s marketing, “Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing IT environment.”
It is this last point that is most important to anyone transitioning away from Windows Server 2003. Since Microsoft Active Directory can span both on-premise and cloud-based servers, it becomes easy to maintain one database of security and access rights and approach the combination as a single entity. This, in essence, creates a “Datacenter without boundaries” in which you can burst beyond the capacity of your local service to the highly-elastic resources of Azure. Azure also provides complete data center redundancy, a level of resilience that would cost far more if you did it yourself, which is just one illustration of the cost-effectiveness of the Azure solution. Speed and high security make it a highly desirable place to migrate your workloads.
But which workloads? Which should go to the Azure cloud and which to your local on-premise Windows Server 2012 R2 units?
The Migration Process – Microsoft Best Practices
Microsoft recommends a simple, yet elegant, four-step migration process:
- Discover – Catalog your software and workloads
- Assess – Categorize applications and workloads
- Target – Identify the destination(s) for each of your workloads
- Migrate – Make the actual move
A potential fifth step would be the ongoing management of your new environment to constantly assure optimum performance.
Not surprisingly, this closely resembles the CloudStrategies process framework of “Discover, Adopt & Manage,” in which we guide clients to discover all of the assets in their IT environment, adopt new cloud structures to accommodate each, and manage all of it with newfound ease and facility.
Turn to CloudStrategies for the experience and the expertise required to perform all of these for you. Our experience performing many such migrations benefits your project, and you will find it far less costly than training your own people to do something they will only do once.
Discovery tends to become more extensive, and more tedious, than most anticipate it will be, but it’s crucial to be as comprehensive as possible. Missed applications and workloads can become headaches later on. Once the entire inventory has been documented, it is important to assess the applications, the workflow related to each data entity, and potential impacts upon users from various scenarios.
The four likeliest targets for your workloads are:
- Windows Server 2012 R2 Server running on your premises
- Microsoft Azure
- A Cloud OS Network, likely running on your premises
- Office 365
Obviously, productivity- and communication-related activities will likely be migrated to Office 365. This may include email moving to Microsoft Exchange Online, document management moving to SharePoint Online, and instant messaging, voice, video, and shared application communications moving to Lync Online.
Choosing between the other three targets will be determined by factors including speed, ease of migration, cost, and desired functionality. One good example would be websites, which would be better served by the speed available from the Azure data centers, as well as the elasticity of the storage, processing power and memory, which could all contribute to keeping sites responsive even during times of peak demand.
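As a rough illustration of the Target step, a first-pass categorization like the one just described might be sketched as a simple lookup. The categories and the workload-to-target rules here are illustrative assumptions for the sketch, not Microsoft guidance; real targeting decisions weigh the speed, cost, and functionality factors above:

```python
# Toy first-pass mapping of workload categories to the four targets.
TARGETS = {
    "productivity": "Office 365",
    "website": "Microsoft Azure",
    "latency-sensitive": "Windows Server 2012 R2 (on-premise)",
    "hosted-private": "Cloud OS Network",
}

def target_for(workload_type):
    """Suggest a destination, or flag the workload for manual review."""
    return TARGETS.get(workload_type, "needs individual assessment")

for workload in ("productivity", "website", "legacy-erp"):
    print(workload, "->", target_for(workload))
```

The fallback case matters most in practice: anything the simple rules can't place is exactly the workload that deserves the deeper assessment described above.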
What’s YOUR Cloud Migration Strategy?
Whether migrating to cloud for the first time, migrating from an expiring platform to a new one, or migrating from one cloud service to another, turn to CloudStrategies to provide the guidance, the advice, and the assistance you need to keep your migration flawless. Contact your CloudStrategies Advisor today to learn more.
One of the great axioms of the service industry is that the difference between a great service company and a bad one is that the great service company knows its costs.
A simple statement, yes, but with incredible implications for customers. The great service company that knows its costs can reduce them faster, and pass that savings along to customers. A great service company that knows its costs knows that remedying a problem costs more than having no problem at all. This leads them to one inescapable conclusion:
It Costs Less to Prevent Problems than to Fix Them
Think about the service companies you use that include preventative maintenance in their contract. That’s not just for your benefit, it helps them keep costs down too! It’s also the driving force behind health insurance wellness programs. A healthy patient costs less than one who becomes ill, so keep them from becoming ill.
Prefer the Proactive Managed Services Provider
Most service companies define their Service Level Agreements (SLAs) in one of two ways.
Some define the Maximum Response Time and the Maximum Resolution Time. The first refers to how long you’ll have to wait at most for someone from the service company to respond to your request for service. This is usually anywhere from two to four hours, though some provide a less expensive plan that assures a response within one business day. Resolution time is the time it takes to actually restore your service to full functionality.
Others prefer to guarantee uptime, or what is often referred to as Quality of Service (QoS), measured as availability. For example, many high-quality services will guarantee that your service will be available for use 99.999% of the time, usually referred to simply as “five nines.” Other services assure three nines or fewer. Many offer penalty payments to the customer if they fail to meet their committed QoS.
For this latter group, assuring avoidance of such a slender window of downtime requires that they take steps in advance to assure continuity. They may test circuits more frequently. They may implement redundant connections and systems to failover in the event of an outage. They must be proactive about making sure their network keeps working, because it will often take more time to restore lost function than they are allowed under their own SLA.
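It helps to see just how slender that window is. A quick calculation converts an availability percentage into the downtime it permits per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted by an availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% -> {allowed_downtime_minutes(nines):.1f} min/year")
```

Three nines allows nearly nine hours of outage a year; five nines allows barely five minutes, which is why a five-nines provider has no choice but to be proactive.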
An Ounce of Proactivity Saves a Pound of Disruption
When your managed services provider (MSP) takes the proactive stance of interrogating your network performance reports regularly, they can spot anomalies that, if left alone, will eventually turn into outages.
The proactive MSP will take immediate steps to remedy the anomaly well in advance of the outage, preventing it from causing any disruption to your workflow. How much is that worth?
The proactive MSP will ask you many questions trying to learn more about how your business operates so they can anticipate things that might cause issues later and make provisions for them in advance.
The proactive MSP will establish testing cycles with you. Receiving a report that last night’s backup went off without a hitch feels great. Not so great when you need to restore that backup and it doesn’t restore. Backup restoration is just one of many subsystems in any on-prem, cloud, or hybrid environment that should be taken offline and tested regularly, at times when testing will not disrupt work.
Your Proactive Cloud Strategies
When considering or evaluating MSPs to choose one to support your cloud environment, ask about the proactive and preventative measures they take to prevent outages instead of having to remedy them. The one who replies most passionately about proactivity is your preferred provider.
We’re a little more than a month into turning on Office 365 Multi-Factor Authentication (MFA) for everyone at CloudStrategies. My aim here is to share some thoughts and observations around the experience of using the technology across all my various devices. Is MFA a great way to secure our Office 365 tenant or a productivity buzz-kill? Within the first few days – I would have said a definite yes to both those questions. After a little more time using it every day, I still believe in the security benefits, but have warmed up enough to feel a little less productivity challenged. More than that, I feel comfortable that I’m taking reasonable and prudent measures to protect access to our systems and data while leveraging the investments we’ve already made in Office 365.
So – let’s start with level setting on what MFA is, and why I believe more and more businesses are going to deploy it sooner rather than later. Frequently referred to as 2-factor authentication, MFA is technology that requires that a user not only have a username and password to access technology platforms, but also prove that they possess something as an additional level of security before accessing systems. The classic example that’s in everyone’s wallet is a debit card. The card without the PIN isn’t useful, and the PIN without the card doesn’t get you money from an ATM either.
Years ago I carried an RSA SecurID token that had a rotating number on a screen that I needed to have with me at all times to access corporate platforms. The geek in me thought it was cool to carry with me on my key-chain – but the user in me quickly found it difficult to have to sign in to a VPN before I could do any work from outside the office. Though it may have been subtle, it definitely was enough of a pain that I wouldn’t bother signing in for anything other than a very specific purpose or goal – thus discouraging me from doing as much work as I otherwise might have from outside of the office.
Today, with Microsoft’s implementation of MFA for Office 365, I have a similar feeling of security to what I had with my RSA token, yet for my main devices and applications I also have a sort of “fast pass” that makes the productivity hit much more manageable.
There are two core components of MFA that end users will learn to manage. The first is very much like the RSA experience – though it is primarily delivered through an app on the end user’s cell phone. The second is called an App Password and can be used as a one-time code for any application that needs to access an Office 365 resource on a regular basis (in the background) – such as email clients, OneNote, calendar applications, cell phones, etc. Let’s talk about the experience of each of these parts of MFA:
For the first part, any time a user needs to access any Office 365 resource through a web browser – whether on their own device or on a public device – they will start by signing in normally with their username and password. After doing so, instead of immediately gaining access to their account, they will be prompted to provide a second level of authentication. For this, there are a few choices. The one I’ve been using is to be prompted for a six-digit number that I can only retrieve by launching a simple app on my mobile phone. When prompted for the code, I simply pull out my phone, launch the app, and wait for it to provide me with the number. The number changes continuously – every 30 seconds or so – so you can never predict what it is, and you need to type it in within a given time period. This works exactly like my old RSA token did – perhaps with one benefit: when I’m home my phone is never very far away from me, as opposed to where I kept my keys and RSA token, which would have me running to the other side of the house to retrieve them.
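The post doesn’t describe Microsoft’s implementation, but rotating six-digit codes of this kind generally follow the time-based one-time password (TOTP) scheme standardized in RFC 6238. A minimal sketch, assuming the standard SHA-1 variant with a 30-second step:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Generic RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step            # changes every `step` seconds
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference secret; at T=59 seconds the counter is 1.
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```

Because the code is derived from the current time and a secret shared at enrollment, the phone can generate it offline, and a stolen password alone is useless without the device holding the secret.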
For all non-browser-based access to Office 365 applications, a user’s regular password will no longer be enough to access the system. Because applications like Outlook, Office applications, mobile phone apps, etc. do not have a mechanism to support the entry of an authentication code, they will instead leverage a uniquely generated “App Password”. Office 365 can generate up to 40 unique 16-character App Passwords that can be used for individual applications or devices. App Passwords, once generated, can never be displayed a second time. They are entered and stored in individual applications on a per-device basis, and once entered, applications function normally – without the need for an MFA authentication code. The security strength of App Passwords comes from the fact that they can be deleted at any time. The productivity benefit of an App Password comes from the fact that once entered, those applications no longer need to have a password entered for recurring access to Office 365. In the event of a breach, once an App Password is deleted from the Office 365 console, any apps that have stored that password will no longer be able to access Office 365. Think about a scenario where a device is lost or stolen – the simple action of deleting the App Password will nullify that device’s ability to provide access to anything that shouldn’t be accessed.
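That lifecycle (generate once, store per device, revoke to cut off a lost device) can be modeled as a toy in a few lines. The class and method names here are hypothetical illustrations, not the Office 365 API:

```python
import secrets
import string

class AppPasswordStore:
    """Toy model of per-device app passwords with instant revocation."""

    def __init__(self):
        self._active = {}   # password -> device label

    def generate(self, device):
        # 16 random lowercase letters, similar in shape to Office 365
        # App Passwords; a real service would store only a hash.
        pw = "".join(secrets.choice(string.ascii_lowercase) for _ in range(16))
        self._active[pw] = device
        return pw           # shown to the user exactly once

    def revoke(self, pw):
        # Lost or stolen device: deleting its password cuts off access.
        self._active.pop(pw, None)

    def is_valid(self, pw):
        return pw in self._active

store = AppPasswordStore()
pw = store.generate("phone")
print(store.is_valid(pw))   # True
store.revoke(pw)
print(store.is_valid(pw))   # False
```

The design point is that each device holds its own credential, so revoking one device never disturbs the others.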
Security in our lives always comes at a cost – frequently restricting access or limiting our capabilities. Microsoft’s Office 365 MFA solution provides an increased level of protection with a reasonable approach to securing systems and data. Any productivity hit is likely short-lived for most users, and it is outweighed by the comfort businesses gain knowing that user data won’t easily be compromised through the loss of a device or the inadvertent compromise of an individual’s password.
How’s this for an IT Manager’s nightmare? Your company today announced that it had acquired its largest competitor. Great news!!! You’ve just been informed that you need to double the capacity of your data center… by tomorrow.
Put the defibrillator back in the case on the wall and relax. This will be no problem for you. In fact, your biggest challenge will be getting the new company to give you the new workloads that need to be accommodated by your instantly expanded data center. It’s a snap. It’s a breeze.
Your Data Center Away from Home
No, you won’t have to find a supplier who will ship dozens of new servers to you immediately, nor recruit a team of bug-eyed techies to stand them all up overnight. In fact, very little coffee will be required to accomplish this feat.
Microsoft Azure lets you accomplish what may be the ideal example of the hybrid cloud in action. However many or few host servers you may be managing in your own data center, you simply provision new enterprise-grade virtual machines on Azure as you need them. You can readily bring over your existing virtual machines or create new ones, each pre-populated with your choice of operating system and the enterprise apps you need. You run these on the Azure Virtual Network, an isolated environment where you control DNS, subnets, firewall policies, private IP addresses and more.
Workloads are by no means limited to Microsoft platforms. You can run Windows or Linux, and enterprise apps such as SAP, Oracle, SQL, and Hadoop on Azure VMs.
Make the Connection and Manage It All As One!
Connect your on-premise data center to your Azure data center as easily as connecting a branch office, using the Azure Virtual Network over either a secure VPN or a private ExpressRoute connection. You control all the networking and security parameters on Azure with the same tools as you do your own data center. It all feels like one thing. It’s all managed as one.
No need for additional Active Directory structures, either. With Active Directory for Windows Server 2012 R2 and Azure Active Directory you bring it all together in one forest.
It’s Not Just IaaS, it’s PaaS too!
Microsoft technology meets the multi-platform world on Azure. You can develop and deploy modern applications that run on Android, iOS, and Windows and take the fullest possible advantage of cloud technology. You get some spectacular SQL and NoSQL data services, too, which give you deep insights into your data. This is a cloud-based developer platform with serious horsepower.
And it SCALES!
Back to our original concern: growing your data center rapidly. Need more VMs? Just provision them. Need more storage, processing power, memory or other resources? Available upon demand.
Of course, you won’t have to worry about establishing redundancy to assure business continuity or support disaster recovery. With hundreds of data centers located in 17 different regions around the world, and with both Locally Redundant and Geo-Redundant storage to serve your needs no matter what, Microsoft has that covered!
Time to Talk about Your Data Center in the Cloud Strategy!
Your CloudStrategies Advisor will take you through the process of migrating your workloads and applications to Azure, giving you greater scalability, sustainability, and system certainty than ever before. Start with our Assessment program to determine just how much IT budget you can save, and just how far you can grow with Azure.