Welcome to the Sentinel Blog!
We are proud to feature a carefully curated collection of articles and other content related to the most important technology topics of today and beyond. Our posts are composed and edited by Sentinel’s ALWAYS ENGAGED team of solutions architects, engineers, project managers and other subject matter experts.
Cisco Live 2019: The Major Announcements
There are a number of reasons to attend the Cisco Live! conference every year. The keynote speeches, the panel discussions, the hands-on training sessions, and the opportunities to meet Cisco executives are only part of what this annual five-day event has to offer. Beyond expanding your knowledge base and learning more about IT industry trends, areas like the World of Solutions place the spotlight on Cisco’s many partners showcasing their own unique solutions and services. Having so many different IT executives, experts, and customers gathered in a single location also creates a great atmosphere for networking.
Sentinel was proud to have a strong presence at Cisco Live! 2019, as several members of our team flew out to San Diego for the conference. Some went to explore the many different panels and sessions, while others spent a majority of their time engaging with IT professionals at the Sentinel booth and at the happy hour event we hosted. You can read a little about both types of experiences and the key takeaways from each in last week’s blog. This week, we wanted to share some additional details surrounding the major announcements that were made by Cisco at this year’s event.
IoT, AR, and VR Are Changing Networks
New and emerging technologies, including Internet of Things (IoT) devices, blockchain, augmented reality (AR) and virtual reality (VR), are quickly being adopted by consumers and integrated into their daily lives. Many organizations are working to adapt and keep up with these trends, but are finding that their networks aren’t exactly equipped to handle the heavy demands created by these advanced technologies. As a result, significant network upgrades and refreshes are becoming increasingly common so businesses can ensure they remain prepared for anything and everything that lies ahead.
The Continued Growth of Multicloud
Multicloud is an increasingly popular strategy in which an organization uses different cloud platforms (public, private, hybrid) and providers to meet specific application and workload requirements in an effort to streamline operations and meet business goals. As the number of users and devices connecting to corporate networks continues to skyrocket, multicloud offers greater flexibility, connectivity, efficiency, and customization compared to a more traditional cloud setup. There are also security benefits to having different portions of your environment spread out across multiple clouds and vendors. Cisco wants to help businesses get the most from their multicloud strategy, and has announced it is developing advanced data analytics tools for greater insights and overall management of assets. Sentinel will be hosting a multicloud event in July if you’re interested in learning more!
Advanced Machine Learning Integration
Cisco announced new software enhancements designed to use machine learning and artificial intelligence (AI) to analyze network data and deliver more valuable insights that will accelerate business and application development. The basic idea is that machine learning collects relevant data from local networks and combines it with aggregated data to create a unique network baseline able to grow and evolve as users, devices, and applications are added. Machine learning on the network will also be able to spot potential issues or threats and alert the proper IT personnel so they can address them before things take a turn for the worse.
SD-Access Combines with SD-WAN and Application Centric Infrastructure (ACI)
As Sentinel’s own Matt LaSota said in last week’s summary blog, “The future is seamless integration and automation between SD-Access, SD-WAN, and ACI platforms to deliver an end-to-end network experience, and it’s right on the horizon.” Cisco’s ultimate goal is to make it easier for enterprise IT teams to securely add users and devices to the data center and cloud networks from any branch location. They’ve also managed to improve the user experience by ensuring all application requirements are automatically shared between the data center and WAN. Lastly, Cisco has extended its encrypted traffic threat detection into public clouds, to help make those environments even more secure.
Overall it was another fascinating and fun Cisco Live!, and we’re excited to share these new innovations with our customers so they can remain ahead of the curve. If you would like additional information about any of the announcements or solutions detailed above, please contact us.
Cisco Live 2019: Top Takeaways from Sentinel Staff
Cisco Live! is Cisco’s annual conference for their partners and customers that focuses on technology trends, education, thought leadership, and networking. The primary goal of the event is to provide inspiration and showcase innovation as technology continues to evolve at an incredibly fast rate.
This year’s Cisco Live conference took place from June 9-13 at the San Diego Convention Center in California. Several members of the Sentinel team were in attendance, eager to connect with customers and learn more about the solutions set to transform the technology landscape in the coming months.
As the conference wrapped up, we asked some members of our staff to share the major takeaways or themes from the speeches, seminars and/or networking events they attended. Here are their insights:
Matt LaSota - Sr. Director of Sentinel’s Enterprise Support Services, Network, and CloudSelect
This year’s Cisco Live! focused largely on software-defined, security, and automation solutions. At least those were the topics that most interested me. The keynote made it clear that software-defined is at the heart of what Cisco is doing. They highlighted new innovations and integrations between the current platforms, with particular emphasis on SD-Access, SD-WAN, and ACI. The future is seamless integration and automation between those platforms to deliver an end-to-end network experience, and it’s right on the horizon.
The sessions specifically on SD-WAN and automation were most important for me this year. It was great to see products that have been around for a long time and are part of the regular line card, like ASAv and NGFWv, have their deployment, configuration, and management scripted and automated. That aligns closely with many of the things we do today in CloudSelect.
A special highlight this year was an invitation to the NetVets CCIE/DE Lunch with Cisco CEO Chuck Robbins. Only a small group of people were given the opportunity to participate in Q&A with Chuck and his leadership team. No topic was off limits, and the discussion ranged from the future direction of products to certifications to recommendations and feedback around experiences with Cisco TAC (Technical Assistance Center).
Overall, this year’s Cisco Live! conference was top notch! I learned a lot from the educational seminars and discussion panels, but also had plenty of fun at social and networking events.
Chris Vasquez – Sentinel Sr. Sales Executive
Cisco Live! can be a really interesting and engaging experience, even if you aren’t watching keynote speeches and attending sessions to learn more about certain technologies. Beyond those things, there’s a sprawling convention floor to explore, where all different types of companies offer all different types of solutions. Many of them try to hook you in with appealing visual displays and plenty of giveaways. If you forgot to pack a spare set of socks, for example, there are probably a dozen or more booths giving away pairs for free, branded with their particular company logos of course.
Most of my time at Cisco Live! was spent at Sentinel’s booth on the convention floor, where we too had shirts and mints to hand out if people asked nicely. Along the way I had the chance to talk with a number of different people, from other Cisco partners to potential future customers. While there wasn’t a specific topic or solution that everyone seemed to be talking about, many were interested in learning more about our service offerings. They seemed to feel pretty confident and comfortable with their hardware, software, and cloud technologies; it’s just become tough for them to keep all of it fully secure, optimized, and up to date. This is where things like Sentinel’s Managed Services, NOC monitoring, SOC monitoring, and SIEM can really shine, because we handle all of the day-to-day operations along with regular maintenance and support, so our customers can focus more on their own goals and growth for the future.
It was a great time meeting so many new people, telling them a bit about Sentinel, and generally seeing what other partners are up to. Sentinel held a happy hour event one evening at a nearby bar after the conference that was a whole lot of fun, and Cisco also put together a wild concert with Foo Fighters and Weezer so conference attendees could let off a little steam. I’m hoping I can go back again next year!
If you are interested in learning more about any of the solutions and services outlined above, or are curious about some of the major announcements made at this year’s Cisco Live!, please contact us for additional information.
Is Hyperconvergence Right for Your Organization?
By Geoff Woodhouse, Sentinel Solutions Architect
There’s been a lot of interest lately in hyperconverged infrastructure (HCI) solutions, and as a result plenty of big names like Dell Technologies and Commvault have introduced new offerings such as PowerProtect and HyperScale to help satisfy security and backup needs across all types of hyperconverged environments. Of course there’s also Cisco Hyperflex and Dell Technologies’ VxRail, both of which can run your production environment. Even though the market for HCI solutions continues to expand, it’s important to note that hyperconverged isn’t for everyone.
We’ve spoken with customers who have complained that HCI has added complexity to their environment and left them feeling ill-equipped to handle day-to-day operations. One of the biggest benefits of hyperconverged is that it places everything into one appliance for easier management, but that also makes it much harder to debug if something goes wrong. As a result, some organizations want to keep their servers, storage, and network separate so they can make size adjustments to the individual parts as needed when experiencing growth or acquiring other businesses. Additionally, some organizations have designated server administrators, storage administrators, and network administrators. When you place all three of those pieces into an HCI platform, who does what? It’s all in one pool, so that can be a major challenge on the administration side of things.
Your purchasing cycle can also make it difficult to invest in an HCI platform. Many organizations spread out their technology purchases over three to five years, so one year they’ll refresh their network, the next year they’ll buy servers, and the year after that they’ll buy storage. As pieces start to become older or outdated they’ll create plans to replace them. With hyperconverged however, you have to purchase everything all at once. So you may have just bought new servers last year, but if you decide to invest in a new HCI solution then the servers are going to be replaced as part of the all-in-one package. Your overall refresh cycle needs to change to accommodate the new structure of your environment. Instead of laying out a three to five year plan, you have to budget differently, which can make things easier and more predictable financially but more difficult politically as IT managers negotiate their yearly funding.
So how do you know if hyperconverged infrastructure is right for your organization? If you’re operating a smaller environment with about 5TB of data or less, I would recommend keeping all of your infrastructure pieces separate. The smallest HCI systems come with around 15-20TB of space, so you’d be overbuying, which doesn’t make any sense. If you have anywhere between 20-50TB of data, that’s a sweet spot for HCI. If you’re operating a large environment with around 100TB of data, however, you need to think carefully about whether HCI really is the best solution for your organization. It would be very expensive because you’d need to buy a lot of compute and memory, but you might be able to make it work. I’d also advise looking into other solutions, because with such a large amount of data, a separate storage area network (SAN) keeps it isolated and easier to manage than placing everything on a single platform that may be overwhelmed.
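To make the sizing guidance above concrete, here is a minimal sketch in Python. The function name and return strings are purely illustrative (not from any vendor sizing tool), and the thresholds simply encode the rules of thumb from this post:

```python
def hci_recommendation(data_tb: float) -> str:
    """Rule-of-thumb HCI fit check based on total data size in TB.

    Thresholds mirror the guidance in this post: the smallest HCI
    systems ship with roughly 15-20TB, 20-50TB is the sweet spot,
    and ~100TB environments warrant a careful cost comparison.
    """
    if data_tb <= 5:
        return "keep infrastructure pieces separate (HCI would be overbuying)"
    if data_tb >= 100:
        return "evaluate carefully; a separate SAN may manage this better"
    if 20 <= data_tb <= 50:
        return "sweet spot for HCI"
    return "borderline; compare HCI against traditional infrastructure"


print(hci_recommendation(3))    # small environment
print(hci_recommendation(35))   # mid-size environment
print(hci_recommendation(120))  # large environment
```

Of course, real sizing also depends on compute and memory requirements, growth projections, and administrative structure, so treat this purely as a starting point for the conversation.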
A lot of the marketing around HCI will tell you it’s the best solution for every type of organization, but at Sentinel we’re focused on providing our customers with the right technology to meet their unique needs. If your organization uses 20-50TB of storage and 3-4 servers, I’d say that investing in HCI would be a smart choice about 80% of the time. For everyone else, especially smaller and larger companies that fall outside of those parameters, I’d strongly encourage you to investigate other options for enhancing your environment. If you are interested in learning more about hyperconverged infrastructure or any other innovative technology for your servers, storage, and network, please contact us for additional information.
Time to Upgrade Your Microsoft Windows Server 2008 and SQL Server 2008
Back in 2014, Microsoft announced it would be ending mainstream support for Windows Server 2008 and SQL Server 2008 products. At the time, Microsoft strongly encouraged customers to immediately upgrade to newer versions, or at the very least develop a strategy to quickly phase out the 2008 versions. Thankfully they also understood that these sorts of things take time, and promised that both products would continue to receive extended technical support such as security updates for another five years.
On July 9, 2019, Microsoft will officially end the extended support period for SQL Server 2008. Extended support for Windows Server 2008 will end a few months later on January 14, 2020. Despite ample advance warning, many organizations still haven’t upgraded their systems ahead of this deadline. If your company falls into that category, here’s what you need to know:
The end of Microsoft extended support also means an end to critical security updates. Failure to upgrade would create vulnerabilities that could be exploited by cyber criminals to gain access to your network and environment. The absence of regular security bulletins also makes it impossible to fully protect against hackers and malware.
Continuing to use SQL Server 2008 and/or Windows Server 2008 after extended support expires may place your organization out of compliance with industry regulations such as GDPR. This can lead to significant financial penalties as well as damage to your corporate reputation.
Attempting to maintain and support portions of your environment such as legacy servers, firewalls, intrusion systems, and other areas on your own tends to require a substantial investment of time and money, both of which would be saved with an upgrade.
An upgrade from Windows Server 2008 and SQL Server 2008 creates an opportunity to extend the digital transformation of your business further into the cloud. This can help create a competitive advantage in your industry, improve the overall user and customer experience, increase productivity, and generate additional revenue. If that wasn’t enough, the scalability, compatibility, and security offered by the cloud make an upgrade easy to integrate into your environment, so you can worry less and focus more on other important tasks. You’ll also be able to meet compliance and data regulation standards as part of a reliable, high performance platform ready to handle your business needs.
While there are a number of different ways that your organization can approach an upgrade from Windows Server 2008 and SQL Server 2008, Sentinel wants to make the process as easy and painless as possible. We offer special Jumpstart packages for each product so that everything from initial system assessments to the development of migration strategies to complete deployment, including installation and data transfer, is handled in an efficient, professional, and cost-effective fashion.
If you are interested in learning more about the end of extended support for Microsoft Windows Server 2008 and SQL Server 2008, as well as the Jumpstart packages Sentinel offers to help your organization upgrade, please contact us.
The Anatomy of a Comprehensive Disaster Recovery Plan
by Dr. Mike Strnad, Sentinel Strategic Business Advisor
Cyber-attacks are becoming more frequent and more sophisticated, and they can have devastating consequences for businesses. Determined hackers have proven that with enough commitment, planning, and persistence they will inevitably find a way to access your sensitive corporate data. It is not enough for organizations to merely defend themselves against cybersecurity threats. They need to take proactive measures by developing cyber incident response plans or updating existing disaster recovery plans in order to quickly mitigate the effects of a cyber-attack and/or prevent and remediate a data breach. Small businesses tend to be the most vulnerable, as they are often unable to dedicate the necessary resources to protect themselves. Some studies have found that nearly 60% of small businesses close within six months following a cyber-attack. Today, risk management requires that you plan ahead to prepare, protect, and recover from a cyber-attack.
Disaster Recovery Institute International (DRII) and the Business Continuity Institute (BCI), along with ISO 22301, provide guidance and structure when creating Business Continuity Plans. There are three types of recovery plans built into the structure of traditional Business Continuity: Infrastructure Recovery, Application Recovery, and Disaster Recovery. All three have a specific purpose and form a strategic approach as an organization transitions from the Continuity Phase to the Recovery Phase. These plans should be incorporated with a solid infrastructure defense by using such appliances as IDS/IPS, a well-defined Security Operations Center (SOC), and a highly knowledgeable security monitoring staff.
No organization is immune. The world is unpredictable, and disaster could strike at any time. You buy insurance to protect your business financially against losses, but insurance cannot replace valuable data and the key applications that make your business work. To protect these items you must plan ahead, creating a plan to restore your data when it is lost. Here are five dangerous situations that could significantly impact your business:
1. Natural Disasters - Mother Nature can be cruel. Storms, fires, and floods can all do irreparable damage to your business. Without a disaster recovery plan in place, you may find it extremely difficult to resume operations, putting the future of your company in jeopardy. Many studies have shown that over eighty percent of companies that close for more than five days never reopen, so getting back on your feet is critical in the event of a natural disaster.
2. Hardware Failures - Whether from a power surge or other cause, if your hardware fails it can take all your data with it. While you can take steps to protect your hardware with cooling systems, power surge protectors and other technology, it is essential to regularly back up your data. Using cloud-based or off-site storage can add additional protection, as it is unlikely both locations will fail at the same time. Your disaster recovery plan should include these steps to ward off any potential data loss that could occur.
3. Human Errors - No one is perfect, and that includes you and your employees. Forgetting to save changes, accidentally deleting an important document, or flipping the wrong switch could lead to a significant loss for your company. Training programs can help reduce errors, but the only way to keep your business truly safe from a data loss due to human error is to back up your data on a regular basis.
4. Cybercrimes - Unfortunately, cybercrimes are on the rise and most businesses are affected at some point. A virus or ransomware attack could hold your data hostage, grinding your business to a halt and causing massive profit losses. Your disaster recovery plan should include steps to recover from a hacking attempt, keeping your data safe and accessible.
5. Customer Service - Ultimately, you need a disaster recovery plan to provide your customers the service they have come to expect from you. If your business must shut down or has a prolonged service interruption, you could lose valuable customers to a competitor. The faster you can get back on your feet, the happier your clients will be.
Let’s look at how the three primary Business Continuity Plans fit together. Disaster Recovery Plans have a specific focus that provides multiple types of guidance (as shown in the diagram), and can be expanded based on your organization’s strategies. A strong disaster recovery strategy should start at the business level and determine which applications are most important to running the organization. The Recovery Time Objective (RTO) describes the target amount of time a business application can be down, typically measured in hours, minutes, or seconds. The Recovery Point Objective (RPO) describes the point in time to which data must be restored following an incident, which in practice defines how much data loss is acceptable. Recovery strategies define what an organization will do in response to an incident, while disaster recovery plans describe in detail how the organization should respond. In determining a recovery strategy, organizations should consider a number of different things, including budget, resources, people and physical facilities, as well as management's position on risks, technology, data, and suppliers. Management approval of recovery strategies is essential. All strategies should align with the goals of the organization. Once disaster recovery strategies have been developed and approved, they can be translated into disaster recovery plans.
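To make RTO and RPO concrete, here is a small sketch in Python that checks whether a recovery event met its objectives. The function and field names are illustrative, not part of any BC/DR tool:

```python
from datetime import datetime, timedelta

def meets_objectives(outage_start: datetime,
                     service_restored: datetime,
                     last_good_backup: datetime,
                     rto: timedelta,
                     rpo: timedelta) -> dict:
    """Check a recovery event against its RTO and RPO.

    RTO: how long the application may be down (restore time - outage start).
    RPO: how much data may be lost (outage start - last good backup).
    """
    downtime = service_restored - outage_start
    data_loss_window = outage_start - last_good_backup
    return {
        "downtime": downtime,
        "rto_met": downtime <= rto,
        "data_loss_window": data_loss_window,
        "rpo_met": data_loss_window <= rpo,
    }

# Example: a two-hour outage with a backup taken 30 minutes before failure,
# measured against a 4-hour RTO and a 1-hour RPO.
result = meets_objectives(
    outage_start=datetime(2019, 6, 1, 9, 0),
    service_restored=datetime(2019, 6, 1, 11, 0),
    last_good_backup=datetime(2019, 6, 1, 8, 30),
    rto=timedelta(hours=4),
    rpo=timedelta(hours=1),
)
print(result["rto_met"], result["rpo_met"])  # True True
```

In this example both objectives are met: downtime of two hours falls within the four-hour RTO, and the 30-minute gap since the last good backup falls within the one-hour RPO. Tightening the RPO to 15 minutes would flag the same event as a failure, which is exactly the kind of trade-off the business-level analysis above is meant to surface.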
Infrastructure Recovery Plans focus on many types of infrastructure. Plans can be specific to the following areas:
1. Data centers - After the aggressive virtualization of servers and networks in data centers over the past few years, many networks now need to be redesigned to meet today’s business demands.
2. Cloud Strategy - Whether using a public cloud, private cloud or a hybrid mix, every organization needs a workable cloud strategy that can transform service delivery.
3. Mobile-first strategy - More and more businesses have adopted “mobile-first” strategies.
4. Telecommunications – Communications across locations, platforms, and devices have become more essential than ever for a majority of organizations.
5. Wireless - Faster network speeds, more WiFi availability, and increased reliability have created new challenges and opportunities for businesses to evolve.
6. Internal and External Networks – As networks have become increasingly complex, it has become more important than ever to understand the topology and dependencies required for a quick recovery.
Infrastructure Recovery Plans focus on specific areas (shown in the diagram) but are expanded by an organization’s unique strategies. Technology upgrades are essential to enable certain online services, which require an upgraded electronic transport infrastructure that is both safe and fast. In order to take full advantage of the explosive growth in data as well as new service opportunities, there is a desperate need for infrastructure modifications. The trouble is, achieving progress isn’t as simple as just buying new technology. New and innovative software, hardware, networks, tools, databases, monitoring equipment and more are available for purchase, but legacy systems often slow down progress dramatically. Industry experts have long recognized that the right mix of people, process, and technology is needed to integrate new solutions with solid infrastructure plans for recovery.
Application Recovery Plans have become as common as any other Business Continuity Plan. They document the strategies, personnel, procedures, and resources necessary to recover an application following any type of short- or long-term disruption. Maximize the value of contingency planning by establishing recovery plans that consist of the following phases:
1. Notification/Activation: Activate the plan and notify vendors, customers, employees, etc. of the recovery activities.
2. Recovery Phase: Recover and resume temporary IT operations on alternate hardware (equipment) and possibly at an alternate location.
3. Restoration Phase: Restore IT system processing capabilities to normal operations at either the primary location or the new location.
Start by preparing plans for any applications that are mission critical. Define the activities, procedures, and essential resources required during prolonged periods of disruption to help restore normal operations. Allocate responsibilities to designated personnel and provide guidance for recovery. Coordinate with other staff and important external contacts such as vendors and suppliers who will participate in the recovery process. Remember that applications evolve over time with updates and new revisions.
In conclusion, good Business Continuity and Disaster Recovery Plans will keep your company up and running through interruptions of any kind: power failures, IT system crashes, natural disasters, supply chain problems and beyond.
Here are absolute basics your plan should cover:
1. Develop and practice a contingency plan that includes a succession plan for your CEO.
2. Train backup employees to perform emergency tasks. The employees you count on to lead in an emergency will not always be available.
3. Determine offsite crisis meeting places and crisis communication plans for top executives. Practice crisis communication with employees, customers, and the outside world.
4. Invest in an alternate means of communication in case the phone networks go down.
5. Make sure that all employees, as well as executives, are involved in the exercises so that they get practice in responding to an emergency.
6. Make business continuity exercises realistic enough to tap into employees' emotions so that you can see how they will react when the situation gets stressful.
7. Form partnerships with local emergency response groups—firefighters, police and EMTs—to establish a good working relationship. Let them become familiar with your company and site.
8. Evaluate your company's performance during each test, and work toward constant improvement. Continuity exercises should reveal weaknesses.
9. Test your continuity plan regularly to reveal and accommodate changes. Technology, personnel, and facilities are in a constant state of flux at any company.
Sentinel has all of the solutions, advisory services, and training required to help ensure your organization is fully prepared in the event of a disaster or any other sort of emergency that can significantly impact your business. Please contact us if you are interested in learning more about our Business Continuity and Disaster Recovery planning services.
Frequently Asked Questions: Webex Calling
by Ron Boscaccy, Sentinel VP of Solution Engineering and Product Demonstration
Webex Calling takes the complex infrastructure and management required to maintain a traditional phone system and simplifies it through the cloud. Users can access the system across all types of devices and locations, making it easier to communicate and collaborate with co-workers, partners, and customers. To help give you a better idea of what Webex Calling is all about, Ron Boscaccy, Sentinel’s VP of Solution Engineering and Product Demonstration, provided answers to a few commonly asked questions.
What is Webex Calling?
Webex Calling is a Unified Communications as a Service (UCaaS) solution from Cisco. It’s a cloud-based service that functions as your phone system, so you don’t need any on premise hardware other than the phones themselves. Everything else sits up in the cloud. This is the latest evolution in voice technology.
What are some features of Webex Calling?
The nice thing about Webex Calling is that it’s tied into the whole Webex platform. It’s a very feature-rich system that’s similar to a traditional Private Branch Exchange (PBX) most businesses have today, but it also incorporates Webex Meetings so you can create shared meetings and bridges, as well as Cisco Teams, which gives you a portal for collaboration and sharing documents or other important information. There are mobile capabilities built into the system too, so you don’t have to worry about location. I can use a laptop, a standard landline phone, or even my cell phone to access the system and it’s all tied back to my corporate network.
Does Webex Calling work with different types of non-Cisco environments and solutions?
Yes, absolutely. The nice thing about Webex Calling is that not only is it compatible with Teams and other Cisco products, it also will work with a platform like Office 365. So even if you have a Microsoft platform and their suites, we can still connect and drive your calling while maintaining the communication and management of your applications.
What types of organizations could benefit most from Webex Calling?
There are two primary benefits to Webex Calling. The first is that companies with branch offices or multiple locations can consolidate and bring their phone systems together if they haven’t done so already. The second is that it enables companies to move from a CapEx to an OpEx pricing model. This can be done all at once, or shifted slowly to include any current hardware. A hybrid solution would allow you to move the branch offices at first before eventually expanding to include the corporate headquarters as well. This helps you save money over time while also unifying under a single system.
Are there any other noteworthy benefits to Webex Calling?
One of the biggest pluses about Webex Calling is that it utilizes Broadsoft technology. Broadsoft played a major role in the Public Switched Telephone Network (PSTN) and a number of other high profile phone projects for some very large carriers. They have been around for a long time and know how to create a powerful UCaaS solution. Most people have used Broadsoft technology without even knowing it. Now that Cisco has acquired them, they’re creating this tie-in with Webex and Webex Teams to bring it all into one platform. So it’s consolidating different applications and giving you a single pane of glass to manage it all.
If you are interested in learning more, Sentinel will be hosting a special Webex Calling event next Wednesday, May 29th at our headquarters in Downers Grove. There is a morning session and an afternoon session, so please register for one of them today if you are able to attend! If you are unable to attend the Webex Calling event but would still like some additional information about the solution, please contact us.
Five Major Announcements from Dell Technologies World 2019
The annual Dell Technologies World conference took place in Las Vegas last week, and a few employees and managers from Sentinel were among the 14,000 attendees at the four-day event. It was one of Dell Technologies’ biggest and most action-packed conferences to date, as the company laid out their roadmap for the future that included new products/services, greater integration with VMware, and a fresh strategy for growth. While there were many important announcements made during the event, here are five that we feel are particularly noteworthy.
VMware Cloud Integrates with Dell EMC
Dell Technologies introduced a powerful new consumption-based, on-premises Cloud Data Center-as-a-Service that integrates multiple VMware Cloud solutions into a Dell EMC infrastructure. This includes VMware Cloud Foundation, the VMware Cloud Stack, and the hyperconverged solution VxRail. When utilized properly, it will significantly improve public cloud power and agility for organizations while making it easier to manage on-premises workloads. It’s also compatible across multi-cloud environments, creating a seamless infrastructure where minor day-to-day tasks are handled by VMware and Dell EMC so IT departments can focus more on innovation and growth. The Cloud Data Center-as-a-Service is expected to be available as a subscription-based service in the second half of this year.
New, More Powerful Switches
As enterprise organizations continue to generate and consume massive amounts of data, it’s more essential than ever to have switches able to handle the traffic coming from the cloud, on-premises systems, and endpoints. With that in mind, Dell EMC announced a new line of open networking switches called PowerSwitch. The first model set for release is the PowerSwitch S5200-ON, which is 2.5x more powerful than previous Dell EMC switches and was designed with hyperconverged infrastructure (HCI) environments in mind. Its low-density connectivity helps with automation and transitional changes across all different types of deployments or upgrades.
SD-WAN Edge
Dell EMC has bundled its hardware with VMware SD-WAN by VeloCloud to create a powerful new software-defined networking solution that’s available in one-year or multi-year subscriptions. SD-WAN Edge is a network-optimized server designed to run virtualized network functions. The goal is to provide a more cost-effective and flexible solution that makes it easier for organizations to solve their problems at the edge. SD-WAN Edge is expected to be available this July.
Unity XT
The Dell EMC storage platform Unity gets a next-generation upgrade with the Unity XT. This new version was designed with NVMe drives in mind, and according to Dell EMC is both twice as fast as the original Unity and 67 percent faster than any other storage solution currently on the market. It’s optimized for up to 5:1 data reduction and well suited for smoothly shifting data to public cloud or multi-cloud environments.
Dell EMC Cloud Storage Services
Extend your data center into the public cloud with the new Dell EMC Cloud Storage Services. This high-speed, low-latency connection uses managed services to seamlessly integrate with Dell EMC’s Unity, PowerMax, and Isilon data center storage lines. The initial Cloud Storage offerings will include Disaster Recovery as a Service (DRaaS) as well as multi-cloud access to perform workload analytics and testing/development.
Sentinel is proud to be a Dell EMC Platinum Partner, and we’re looking forward to sharing these new solutions and innovations with you as they become available. If you would like any additional information about these Dell Technologies World 2019 announcements and how they can benefit your organization, please don’t hesitate to contact us.
A Guide to Modern Password Security
by Jason Olmstead, Sentinel SOC Senior Exploitation Analyst
As of the latest draft of the Security Configuration Baseline document for Windows 10 and Windows Server (version 1903), Microsoft has dropped its recommendation for a password expiration policy on both operating systems. Previously, Microsoft’s baseline recommendation would force users to change their passwords every 60 days, and before that, every 90 days. The theory behind having users change their passwords regularly was that passwords would always be “fresh” and harder to compromise, and that compromised passwords would be usable for a shorter period of time. A moving target is much harder to hit than a fixed target, right?
To answer that question, let’s first think about the most common ways that user credentials are compromised. There are three common vectors attackers use to target user credentials: social attacks, technical attacks, and reconnaissance attacks.
Social attacks are very common and are primarily composed of several types of phishing attacks. There are a few specific subtypes of phishing attacks, but in general the primary goal of a phishing attack is for an attacker to convince a user to hand over access credentials. A common way to achieve this is to provide the user with a fake login screen, typically on a webpage or other form, which looks authentic so that the user provides their credentials. Credentials are then captured by the attacker, and sometimes those valid credentials are passed to an authentic login mechanism so the user isn’t ever suspicious. Tools that automate this type of attack are freely available and not difficult to use. In order for social attacks to be successful, the attacker has to convince the user to perform an action which will result in compromised credentials.
Having to convince users means having to interact with them, and that opens the door for exposure to an attack attempt. If a user is wary of phishing and other social attacks, they might alert their IT or security department. Although social attacks are still very common, most cyber criminals are looking for the path of least resistance. Technical or reconnaissance attacks tend to be a safer option.
Technical attacks rely on a hacker’s ability to compromise and exploit systems or networks in order to gain access to user credentials. These attacks do not rely on any communication with users directly, and often go undetected by IT staff. A common way of exploiting a Windows network in order to obtain user credentials is to use a tool like Responder, which exploits a flaw in the way Microsoft Link Local Multicast Name Resolution (LLMNR) works.
When a client attempts to access a trusted network resource, an LLMNR request gets sent in an attempt to locate the resource. Since the request is multicast to the local network segment, all other clients on that segment are able to see it. A tool like Responder will automatically respond to the client’s request and assume the role of the intended resource. Responder will respond to the request and ask for login credentials, which the client’s Windows machine is more than happy to provide. Responder captures the client’s username, NTLM/v2 password hash, domain/workgroup info, and IP address, then stops communicating with the client. After a certain period the session times out, and the client makes the request again. Responder knows that it has already captured information for that client (based on IP address), so Responder ignores subsequent requests and allows the appropriate network resource to respond. At this point the attacker can take the password hash offline to crack or use it in a pass-the-hash attack against other resources.
The third type of password attack has traditionally not been very common, but is gaining steam very quickly. Reconnaissance attacks involve collecting, indexing, and making searchable large databases of known usernames and passwords from questionable parts of the Internet (typically “deep web” and “dark web” sites). These databases contain usernames and passwords from users that have been involved in high-profile password breaches over the past several years, spanning tens of billions of credentials from various system breaches ranging from the mid-2000s up to just weeks ago. Although these databases typically don’t contain password dumps from end-user systems (for example, an organization’s Active Directory database), they are still very valuable to attackers.
It’s pretty standard for users to sign up for services like Dropbox, Adobe, LinkedIn, Twitter, OneDrive, and other online services using their corporate email address as a username. It’s no secret that a very common practice among users is to reuse passwords across many sites, as doing so makes passwords easy to remember. Users also typically choose passwords that are only long enough to meet password requirements for any given system, so most passwords tend to be between 8 and 12 characters long and contain dictionary words or names. An attacker can use this information to derive a fairly powerful list of potential passwords for a corporate user based on their password history as exposed in breach databases. This list can be used against Internet-facing corporate systems in an attempt to brute-force a user’s login information, or it can be used in conjunction with a technical attack as described above to more easily crack a user’s stolen password hash.
Qualifying Microsoft’s Password Recommendation
Now that we understand the three most common vectors to obtain a user’s credentials, does Microsoft’s latest recommendation make sense?
There are several reasons why reconnaissance attacks are becoming so popular with attackers, and they’re closely tied to the password weaknesses we’ve all known about for years. Users tend to select short, weak passwords that are easy to remember. Users tend to reuse these short, weak passwords across many sites and services. When forced to change a password at a regular interval, users tend to simply modify their existing password, typically by incrementing a number within it or something similar. If systems prevent users from using dictionary words in their passwords, users tend to replace letters of a dictionary word with numbers or special characters that look like those letters (4 for A, ! for i, 0 for O, etc.). Modern password cracking tools have automated rules that exploit all of these weaknesses, making literally billions of guesses per second on a modern GPU.
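To make those “automated rules” concrete, here is a minimal Python sketch of how a cracking tool expands a base word into likely human variants. The substitution table and suffix rules are a tiny, illustrative subset of what real tools like Hashcat automate; none of this reflects any particular tool’s actual rule syntax.

```python
# Illustrative sketch of password-cracking "rules": given a base word,
# generate the common human variants (leet substitutions, appended digits)
# described above. The rule set here is a tiny, hypothetical subset.

LEET = {"a": "4", "i": "!", "o": "0", "e": "3", "s": "$"}

def leetspeak(word):
    """Replace letters with look-alike symbols (4 for A, ! for i, 0 for O)."""
    return "".join(LEET.get(c.lower(), c) for c in word)

def mutate(base):
    """Yield likely variants of a base password."""
    candidates = {base, base.capitalize(), leetspeak(base)}
    # Users tend to "increment" passwords when forced to change them.
    for word in list(candidates):
        for n in range(10):
            candidates.add(f"{word}{n}")
            candidates.add(f"{word}{n}!")
    return sorted(candidates)

guesses = mutate("password")
print(len(guesses))              # dozens of guesses from one base word
print("p4$$w0rd" in guesses)     # the leet variant is covered automatically
```

A real rule engine applies thousands of such transforms to millions of base words, which is why a “clever” variation of a dictionary word offers almost no protection.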
Knowing user habits in regard to the creation of weak passwords, and knowing how users typically only increment passwords when forced to change them on a regular basis, Microsoft understands that forcing a user to change a weak password every 3 months isn’t nearly as important as forcing the user to create a “long and strong” password once and allowing them to use it for a much longer period of time. If a user increments a number at the end of a weak 8-character password, the password is still weak and can still be cracked in a matter of hours or days. If an attacker knows this password and the user changes it, the attacker can simply make logical guesses to figure out the “new” variant with ease. If a user isn’t forced to change a strong 24-character password for a year or two, that’s fine, because the length of that password alone would take 20+ years to crack with modern technology.
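The arithmetic behind those claims is easy to check. Here is a back-of-the-envelope sketch; the guess rate of ten billion per second is an assumed round number for a modern GPU rig, not a benchmark, and real cracking times depend heavily on the hash algorithm in use.

```python
# Back-of-the-envelope time-to-crack estimate for an exhaustive search.
# The guess rate is an illustrative assumption, not a measured benchmark.

GUESSES_PER_SECOND = 10_000_000_000   # ~10 billion/sec (assumed)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(charset_size, length):
    """Worst-case years to try every password of a given length."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# A lowercase-only 8-character password vs. a 24-character mixed-case one.
print(f"{years_to_exhaust(26, 8):.6f} years")   # falls in well under a day
print(f"{years_to_exhaust(52, 24):.2e} years")  # effectively uncrackable
```

The point of the exercise is that keyspace grows exponentially with length, so a few extra characters buy vastly more protection than any amount of forced rotation.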
So to answer the original question: yes, Microsoft’s recommendation of eliminating password expiration policies does make sense. However, the shortcomings of the initial password requirements should also be remedied. Microsoft acknowledges that password expiration policies don’t make systems any more secure. From the latest Microsoft Security Baseline document:
“Periodic password expiration is a defense only against the probability that a password (or hash) will be stolen during its validity interval and will be used by an unauthorized entity. If a password is never stolen, there’s no need to expire it. And if you have evidence that a password has been stolen, you would presumably act immediately rather than wait for expiration to fix the problem. … And if it’s not a given that passwords will be stolen, you acquire those problems for no benefit. Further, if your users are the kind who are willing to answer surveys in the parking lot that exchange a candy bar for their passwords, no password expiration policy will help you.”
Microsoft goes on to explain that they are not proposing that organizations weaken requirements for minimum password length, history, or complexity, as all of those factors combined are much more important than forced password expiration.
Ways to Improve Your Password Security
Ideally, we would do away with passwords altogether and use a better form of authentication. This is the dream, but the reality today is that many systems will still only allow authentication via passwords. Since our reality includes a system whereby users are expected to create and manage secure passwords themselves, the following steps should be taken to make the most of a less than optimal situation.
+Use multi-factor authentication (MFA) wherever possible. This is especially important on Internet-facing services like VPN and Outlook Web Access / Office 365. Organizations often neglect the importance of securing email with MFA. If an attacker can access a user’s inbox via Outlook Web Access, they can dump the Global Address Book and easily harvest usernames for everyone inside the organization. If that email account can be used as a second-factor avenue for something like a VPN login, the VPN would then be compromised as well.
+Enforce banned password lists in Active Directory. Microsoft provides a service through Azure AD that can be implemented with on-premises Active Directory. This service automatically checks user passwords against a large database of known weak and compromised passwords; if a password the user attempts to set matches one in that database, the system will not allow it to be used.
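A minimal sketch of how such a check works, assuming a small local banned list. Microsoft’s actual service normalizes and scores candidate passwords in more sophisticated ways; the banned list and normalization rules below are illustrative only.

```python
# Simplified banned-password check: normalize a candidate by undoing common
# look-alike substitutions and stripping the usual digit/! suffix, then
# compare against a banned list. Both the list and the rules are toy examples.

BANNED = {"password", "letmein", "qwerty", "welcome"}

# Map the common leet substitutions back to letters: 4->a, 0->o, $->s, etc.
UNLEET = str.maketrans("40$31@!", "aoselai")

def is_banned(candidate):
    """Reject a password whose normalized core appears on the banned list."""
    core = candidate.lower()
    core = core.rstrip("0123456789!")   # drop the typical appended suffix
    core = core.translate(UNLEET)
    return core in BANNED

print(is_banned("P4$$w0rd2024!"))            # True - it's still "password"
print(is_banned("&WeWentToTheZooLastWeek&")) # False - passphrase passes
```

Even this crude normalization catches the “clever” variants users actually produce, which is why banned-list enforcement pairs well with dropping expiration policies.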
+Perform regular password audits against the Active Directory SAM database. As an administrator, it’s rather trivial to export a dump of the password hashes within the Active Directory database. This dump can then be provided to a trusted partner such as Sentinel, where a skilled penetration tester can use common password databases, breach password databases, and heuristic brute-force attacks against hashes within the database to expose weak passwords. This will measure the effectiveness of an organization’s password policy, as well as how effective its users are at choosing secure passwords.
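At its core, such an audit is just hashing a candidate wordlist and comparing against the dump. The sketch below illustrates the idea; note that Active Directory actually stores NT hashes (MD4 over UTF-16LE), while SHA-1 is used here only so the example runs anywhere, and the “dump” and wordlist are fabricated sample data.

```python
# Toy offline password audit: hash candidate passwords and compare against
# hashes "dumped" from a directory. AD really stores NT (MD4) hashes; SHA-1
# stands in here for portability. All usernames/passwords are made up.
import hashlib

def h(password):
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Pretend these hashes came from a sanctioned audit dump.
dumped = {"alice": h("Summer2019!"), "bob": h("x7#Qm!v9Lp$2TzR8wK&4")}

# Candidate list built from common-password and breach databases.
wordlist = ["Summer2019!", "password1", "letmein"]

cracked = {user: pw for user, hash_ in dumped.items()
           for pw in wordlist if h(pw) == hash_}
print(cracked)   # alice's seasonal password falls immediately; bob's survives
```

A real audit runs billions of candidates through GPU-accelerated tools against the actual NT hashes, but the comparison logic is exactly this simple, which is why weak passwords are found so quickly.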
+Ensure that all Microsoft best practices are followed in regard to how passwords are handled from a technical perspective. Disable the use of LLMNR on the network via Group Policy. Require Server Message Block (SMB) signing to mitigate SMB man-in-the-middle attacks, which can be used to expose password hashes. Add a DNS entry in Active Directory to prevent attackers from exploiting the Web Proxy Auto-Discovery (WPAD) protocol to obtain cleartext credentials from users.
+Educate users on the dangers of social engineering attacks. Sentinel’s Advisory Services team offers training to teach users how to identify many types of attacks, including obfuscated links in email, dangerous attachment types, forged emails that appear to come from a trusted source, and many others. Education like this helps users stay safe at work as well as at home.
+Encourage users to create passwords that are “long and strong.” A long and strong password should contain more than just lowercase letters, but doesn’t have to look like alphabet soup to be effective. A password like “S&&4$2j0*jf!!3Nmf)3=@+2&5” might take a long time to crack, but it’s impossible to remember, frustrating to type, and will get written down. Something like “&WeWentToTheZooLastWeek&” is easy to remember, is 24 characters long, contains characters from three character sets (upper, lower, special), and should take a prohibitively long time to crack. Using “pass phrases” like this instead of “passwords” is quickly becoming a popular way to create long and strong credentials. To encourage the use of pass phrases, high minimum password length policies can be implemented, but only after users are educated on how to create easy-to-remember long and strong passwords.
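Pass phrases can even be generated rather than invented. Here is a diceware-style sketch using Python’s secrets module; the tiny word list is a stand-in (real generators draw from curated lists of thousands of words).

```python
# Diceware-style passphrase sketch: a few randomly chosen common words,
# joined and wrapped in special characters, give length without alphabet
# soup. The word list below is an illustrative stand-in, not a real one.
import secrets

WORDS = ["zebra", "window", "coffee", "planet", "guitar", "marble",
         "sunset", "puzzle", "rocket", "garden", "silver", "meadow"]

def passphrase(n_words=4):
    """Join randomly chosen words, capitalized, bracketed by specials."""
    return "&" + "".join(secrets.choice(WORDS).capitalize()
                         for _ in range(n_words)) + "&"

p = passphrase()
print(p, len(p))   # e.g. &CoffeePlanetZebraGuitar& - 20+ characters
```

The secrets module is used deliberately instead of random, since the latter is not suitable for security-sensitive randomness.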
Personally, I think Microsoft’s recommendation is a sound one, as long as the guidance above is followed. It serves to start a conversation about how we think of and formulate passwords today, and helps us understand that they are typically the weakest link in an organization’s security. Taking steps to mitigate weak password creation, discouraging password reuse, and implementing multi-factor authentication wherever possible are solid steps toward improving an organization’s overall password security.
If you are interested in learning more about password security and ways that Sentinel can help your organization stay safe from all types of threats, please contact us for additional information.
Frequently Asked Questions: Cisco DNA
by Robert Keblusek, Sentinel Chief Technology Officer
Cisco Digital Network Architecture (DNA) has exploded in popularity recently as an advanced software-defined networking platform that contains a number of different features and innovations designed to enhance business growth, agility, and security. There are a multitude of benefits worth exploring if your organization is interested in building or enhancing a software-defined network, and Sentinel is proud to offer DNA in a variety of formats and bundles to help our customers find the right solution for their specific environment. To help give you a better idea of what Cisco DNA is all about, Sentinel CTO Robert Keblusek provided answers to a few commonly asked questions:
What is Cisco DNA?
Cisco DNA is software-defined networking designed to support digitization efforts. This includes mobility, cloud, and Software as a Service (SaaS) consumption, along with Internet of Things (IoT) services.
Software-defined networking eliminates the need to update firmware, software, and configurations on tens or hundreds of devices over many months. Studies show that 43% of network administrators’ time is spent troubleshooting, while 95% of overall IT tasks are done manually. Updating networks with security patches alone is a daunting task for most organizations, and can even require a team effort that includes CCIEs or a managed service provider. DNA makes these updates simple and fast through automation. Organizations then benefit from a more secure network with strong policy and governance while shifting their most skilled IT resources to focus on more impactful business needs.
How does DNA help with cloud and SaaS consumption?
DNA extends into the wide area network where SD-WAN services allow for a smart edge. This edge has security natively embedded and has the ability to think and route packets appropriately. Gartner estimates over 20% of Office 365 deployments struggle due to networking issues or latency. Software-defined networking such as DNA can be a solution to these issues. With Office 365 and content collaboration at the center of digitization efforts for many organizations, this is a big deal.
In the past, maybe 80% of your traffic went from the end user to your data centers. With cloud and SaaS, that traffic now goes to the cloud. This changes things and you need an agile, software-driven network to continually adapt to these needs.
How does DNA help with security?
DNA was also designed with security embedded instead of it being an afterthought. Security features such as deep inspection of encrypted traffic, rapid threat containment, profiling, posturing, and identity access settings are all easily deployed and maintained across your entire network.
What is a good way to get started?
DNA Assurance provides in-depth visibility to the transactions on the network and can minimize troubleshooting. In addition, users can resolve issues faster because Assurance empowers the help desk. Highly skilled staff no longer need to speculate on what might have occurred because they have real analytics showing detailed information. Assurance works with most existing Cisco networks and is a great start for organizations to build toward the full DNA software-defined experience.
Sentinel has some great FastPath bundles for DNA Assurance to help your organization see the value very quickly and economically. Please contact us for further details or if you have any additional questions about Cisco DNA.
Technology at the Movies: The Hummingbird Project
At Sentinel, we love movies. We’ve even been known to host a movie premiere or two for our customers. While superhero films and other blockbusters understandably attract a lot of attention, we also get excited about smaller movies, especially when the plots focus on technology and innovation. It can be a real kick to see a fictionalized version of an IT department or hackers launch a “cyber attack,” even if it bears little resemblance to reality.
One of the more recently released technology-focused films is the financial drama/thriller The Hummingbird Project. It received a limited U.S. theatrical release in mid-March and can still be seen in certain markets depending on where you live. The plot centers on Vincent and Anton (Jesse Eisenberg and Alexander Skarsgard), who are cousins and work together at a high-frequency trading brokerage firm in New York. Vincent is the hustler and big idea man, while Anton is the brains focused on developing new ways to help the firm gain a little extra edge over the competition.
Both Vincent and Anton are frustrated with their jobs and feel under-appreciated, so they hatch a plan to forge their own path in the world of high-frequency trading: Create a 4-inch wide, 1,000-mile long fiber optic cable that will go in a straight line from a stock exchange in Kansas City to a data center in New Jersey. Any Wall Street brokerage firm with access to that cable would receive a one millisecond (or one flap of a hummingbird’s wing) advantage on all trades, and in turn net hundreds of millions of dollars in profits.
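The physics behind that millisecond can be checked with simple arithmetic. Light in silica fiber travels at roughly two-thirds of its vacuum speed; the refractive index of 1.47 below is a typical assumed value, and the single-millisecond edge in the film comes from this route being shorter and straighter than existing paths.

```python
# One-way propagation delay over 1,000 miles of fiber.
# Physical constants plus an assumed refractive index for silica fiber.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47      # typical for silica fiber (assumed)
MILES_TO_KM = 1.609344

distance_km = 1_000 * MILES_TO_KM
speed_km_s = C_VACUUM_KM_S / REFRACTIVE_INDEX
delay_ms = distance_km / speed_km_s * 1_000

print(f"{delay_ms:.2f} ms one way")   # ~7.9 ms; the millisecond advantage
                                      # comes from beating longer routes
```

It also shows why a perfectly straight path matters so much: at these speeds, every extra mile of detour costs about eight microseconds.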
The primary challenges they face are twofold: First, they need to carve a completely straight fiber optic path between the two endpoints, which includes securing permissions and digging through privately-owned land, government-owned land, and the Appalachian Mountains. Second, the quality of the cable and straightness of the path don’t automatically provide that single millisecond boost in speed, so they need to develop a new mathematical algorithm to help them reach that point. Thankfully they have a multi-billionaire backing their project, so funding to pay landowners, drill teams, and specialty equipment is the least of their worries.
That’s the basic setup of The Hummingbird Project, but as the film moves forward things become increasingly complicated as Vincent and Anton face off against their ruthless former boss (Salma Hayek), stubborn landowners and environmental concerns, plus moments that threaten their physical and mental health. Of course if it were easy, that would make for a pretty boring and uneventful movie. As it stands, there’s a whole lot of plot to take in over the film’s two-hour runtime, and the shift away from the actual cable pipeline project to dive deeper into the personal lives and sentimental reflections of the two main characters feels just a bit cliché and a minor misstep from writer-director Kim Nguyen.
From a technology standpoint, The Hummingbird Project fares better than most when it comes to providing a realistic portrayal of working with fiber optic cables and data centers. Fiber splicing, cable installation, and data centers are all displayed with relative accuracy, and the filmmakers brought in IT industry experts as consultants to ensure the actors and production team understood the concepts and equipment being used. It’s also worth noting that the film is set in 2012, and the technology used to power high-frequency trading has already evolved well beyond the use of fiber optic cables (the characters smartly note the cable they’re installing will be obsolete within a few years). Still, there are plenty of interesting and innovative uses for fiber and other high-speed technology solutions today that go well beyond the financial markets.
If you’re interested in learning more about cabling or other data center solutions and services to help your business, please contact Sentinel for more information. We also work closely with plenty of finance customers, and would be happy to discuss the latest IT innovations that are powering the financial industry.
The Hummingbird Project is currently in limited theatrical release. Check to see if it’s still screening in your area by going here.