Written by Dr. Al Hartmann and presented by Ziften CEO Chuck Leaver
The traditional perimeter is dissolving quickly. So where does that leave the endpoint?
Investment in perimeter security, as defined by firewalls, managed gateways, and intrusion detection/prevention systems (IDS/IPS), is changing. These investments are being questioned, as the returns cannot justify the cost and complexity of building, maintaining, and validating these aging defenses.
Not only that, the paradigm has shifted: employees no longer work exclusively in the office. Many log time from home or while traveling, and neither location sits under the umbrella of a corporate firewall. Instead of keeping cyber criminals out, firewalls often have the opposite effect: they keep the good guys from being productive. The irony? They create a safe haven where attackers can breach, hide for weeks, and then move on to critical systems.
So What Has Changed?
The endpoint has become the last line of defense. With perimeter defenses failing and a "mobile everywhere" workforce, we must now establish trust at the endpoint. Easier said than done, however.
In the endpoint space, identity & access management (IAM) systems are not the silver bullet. Even innovative companies like Okta, OneLogin, and cloud proxy vendors such as Blue Coat and Zscaler cannot overcome one simple truth: trust goes beyond simple identification, authentication, and authorization.
Encryption is a second attempt at protecting whole libraries and selected assets. In the most recent (2016) Ponemon study on data breaches, encryption saved only 10% of the cost per breached record (from $158 to $142). This isn't the panacea that some make it out to be.
The Whole Picture Is Changing
Organizations must be prepared to accept new paradigms and attack vectors. While they must still provide access to trusted groups and individuals, they need to do so in a better way.
Critical business systems are now accessed from anywhere, at any time, not just from desks in corporate office buildings. And contractors (the contingent workforce) are quickly coming to make up more than half of the total enterprise workforce.
On endpoint devices, the binary is predominantly the problem. Likely benign incidents, such as an executable crash, could indicate something simple, like the Windows 10 Desktop Window Manager (DWM) restarting. Or it might be a deeper issue, such as a malicious file or the early signs of an attack.
Trusted access does not fix this vulnerability. According to the Ponemon Institute, between 70% and 90% of all attacks involve human error, social engineering, or other human factors. That requires more than simple IAM; it requires behavioral analysis.
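To make the distinction concrete, here is a minimal sketch of what "behavioral analysis" means beyond IAM: instead of asking only "is this user authenticated?", compare observed endpoint activity against a per-user baseline. The metric (daily privileged-process launches) and the threshold are illustrative assumptions, not any vendor's actual method.

```python
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Z-score of an observed value against a per-user behavioral baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(observed - mu) / sigma

# Hypothetical baseline: daily count of privileged-process launches for one user
baseline = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
print(anomaly_score(baseline, 3))   # typical day -> low score
print(anomaly_score(baseline, 25))  # sudden burst -> high score, worth a look
```

The point: a fully authenticated, fully authorized session can still score as anomalous, which is exactly the gap IAM alone leaves open.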
Instead of making the good better, perimeter and identity access vendors have made the bad faster.
When and Where Does the Good News Begin?
Taking a step back: Google (Alphabet Corp.) unveiled a perimeter-less network model in late 2014 and has made considerable progress since. Other organizations, from corporations to governments, have done the same (quietly and less dramatically), but BeyondCorp has demonstrated its solution to the world. The design philosophy, endpoint plus (public) cloud displacing the cloistered enterprise network, is the essential concept.
This changes the entire conversation about the endpoint, be it a laptop, desktop, workstation, or server, as subservient to the corporate/enterprise/organization network. The endpoint really is the last line of defense; it must be protected, yet must also report its activity.
Unlike the traditional perimeter security model, BeyondCorp does not gate access to services and tools based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user. BeyondCorp considers both internal and external networks to be entirely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or "tiers," of access.
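The tiering idea can be sketched in a few lines. This is an illustrative toy, not Google's actual policy engine: the predicates (managed, disk-encrypted, patched, valid device cert) and tier names are assumptions chosen to show how access derives from device state rather than network location.

```python
# Minimal sketch of tier-based access gating from device state.
TIER_ORDER = ["untrusted", "basic", "privileged"]

def device_tier(device):
    """Assign an access tier from device state, regardless of network origin."""
    if not device.get("managed") or not device.get("disk_encrypted"):
        return "untrusted"
    if device.get("patched") and device.get("cert_valid"):
        return "privileged"
    return "basic"

def can_access(device, required_tier):
    """A device may reach an application only if its tier is high enough."""
    return TIER_ORDER.index(device_tier(device)) >= TIER_ORDER.index(required_tier)

laptop = {"managed": True, "disk_encrypted": True, "patched": True, "cert_valid": True}
print(can_access(laptop, "privileged"))  # True: healthy managed device
laptop["patched"] = False
print(can_access(laptop, "privileged"))  # False: an unpatched device drops a tier
```

Note that nothing in the decision consults an IP address or VLAN; that is the core inversion BeyondCorp makes.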
By itself, this seems harmless. In reality it is a radical new model, and an imperfect one. Access criteria have moved from network addresses to device trust levels, and the network is heavily segmented with VLANs, replacing the centralized model with its capacity for breaches, hacks, and threats at the human level (the "soft chewy center").
The good news? Breaching the perimeter becomes extremely difficult for would-be attackers, and network pivoting (a common technique used by attackers today) becomes next to impossible once past the reverse proxy, proving that firewalls do a better job of keeping the bad guys in than of letting legitimate users out. The inverted design further separates Google's cloud servers, presumably tightly managed inside the perimeter, from client endpoints, which are just about everywhere.
Google has made some nice refinements to proven security approaches, notably 802.1X and RADIUS, and bundled them into the BeyondCorp architecture, including strong identity and access management (IAM).
Why is this important? And what are the gaps?
Ziften believes in this approach because it emphasizes device trust over network trust. However, Google does not explicitly describe a device security agent or emphasize any kind of client-side monitoring (apart from very strict configuration control). While there may be reporting and forensics, this is something every organization should be aware of, because it is a question of when, not if, bad things will happen.
As Google describes it: "Since implementing the initial phases of the Device Inventory Service, we've ingested billions of deltas from over 15 data sources, at a typical rate of about 3 million per day, totaling over 80 terabytes. Retaining historical data is essential in allowing us to understand the end-to-end lifecycle of a given device, track and analyze fleet-wide trends, and perform security audits and forensic investigations."
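A quick back-of-envelope check makes the scale of those numbers tangible. The two-year ingestion window below is an assumption (Google does not state one); the per-delta size is derived from it, not a reported figure.

```python
# Sanity-check the quoted scale: ~3 million deltas/day, billions total, 80 TB.
deltas_per_day = 3_000_000
days = 2 * 365                                # assumed ~two-year window
total_deltas = deltas_per_day * days          # ~2.2 billion deltas
avg_bytes = 80e12 / total_deltas              # 80 TB spread across them
print(f"{total_deltas / 1e9:.1f} billion deltas, ~{avg_bytes / 1e3:.0f} KB each")
```

At that rate a delta averages tens of kilobytes, which is why the article's next point about bandwidth and endpoint horsepower matters for organizations without Google-class networks.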
This is an expensive and data-heavy process, with two shortcomings. On ultra-high-speed networks (like those used by Google, universities, and research organizations), there is enough bandwidth for this kind of communication without flooding the pipes. The first problem: in more pedestrian corporate and government environments, it would cause excessive user disruption.
Second, computing devices must have the horsepower to continuously gather and send data. While most employees would be delighted to have current developer-class workstations at their disposal, the cost of the devices and the process of refreshing them regularly make this impractical.
A Lack of Lateral Visibility
Few products actually produce "enhanced" NetFlow, augmenting standard network visibility with rich, contextual data.
Ziften's trademarked ZFlow™ provides network flow information generated from the endpoint, which would otherwise require brute force (human labor) or expensive network appliances.
ZFlow acts as a "connective tissue" of sorts, extending and completing the end-to-end network visibility cycle by adding context to on-network, off-network, and cloud servers/endpoints, allowing security teams to make faster, better-informed, and more accurate decisions. In essence, Ziften's services deliver labor cost savings plus improvements in speed-to-discovery and time-to-remediation, with technology substituting for human resources.
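The idea of endpoint-enriched flow data can be illustrated with a simple record layout: a standard NetFlow-style 5-tuple plus the endpoint context that plain network taps cannot see. The field names here are hypothetical for illustration, not Ziften's actual ZFlow schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class EnrichedFlow:
    # Standard NetFlow-style 5-tuple, visible to any network appliance
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    # Endpoint-side context that plain NetFlow lacks
    process_name: str
    binary_sha256: str
    user: str

flow = EnrichedFlow("10.0.0.5", "203.0.113.7", 49152, 443, "tcp",
                    "powershell.exe", "<hash placeholder>", "jdoe")
# An analyst now sees WHICH process and user opened the connection,
# not merely which host did.
print(asdict(flow)["process_name"])
```

That extra context is what turns a generic "host A talked to host B on 443" alert into something a security team can act on quickly.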
For organizations moving or migrating to the cloud (as 56% plan to do by 2021, according to IDG Enterprise's 2015 Cloud Survey), Ziften offers unmatched visibility into cloud servers to better monitor and secure the full infrastructure.
In Google's environment, only corporate-owned devices (COPE) are allowed, crowding out bring-your-own-device (BYOD). This works for a company like Google, which can issue new devices (phone, tablet, laptop, and so on) to all personnel. Part of the reason is that identity is vested in the device itself, in addition to the usual user authentication. Each device must meet Google's requirements, carrying either a TPM or a software equivalent of one, to hold the X.509 certificate used to verify device identity and to facilitate device-specific traffic encryption. One or more agents on each endpoint must validate the device predicates called out in the access policy; this is where Ziften would need to partner with the systems management agent vendor, since agent cooperation is likely essential to the process.
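On the server side, requiring a device certificate is ordinary mutual TLS. The sketch below shows the shape of that configuration using Python's standard `ssl` module; the file paths are placeholders, and this is a generic illustration of certificate-gated device identity, not Google's or Ziften's implementation.

```python
import ssl

# Require a client (device) certificate before any application traffic flows,
# as in device-identity schemes where the cert is held by a TPM.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # reject connections without a device cert

# Placeholder paths: the CA that signs device certs, and the server's own keypair.
# ctx.load_verify_locations("device-ca.pem")
# ctx.load_cert_chain("server.pem", "server.key")

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

With `CERT_REQUIRED` set, a device that cannot present a certificate chaining to the device CA never completes the handshake, regardless of which network it connects from.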
In summary, Google has built a first-rate solution, but its applicability and practicality are limited to companies like Alphabet.
Ziften brings the same level of operational visibility and security protection to the masses, using a lightweight agent, metadata/network flow monitoring (from the endpoint), and a best-in-class console. For organizations with specialized needs or incumbent tools, Ziften provides both an open REST API and an extension framework (to enrich data consumption and trigger response actions).
This delivers the benefits of the BeyondCorp model to the masses while conserving network bandwidth and endpoint computing resources. Because companies will be slow to move entirely away from the enterprise network, Ziften partners with firewall and SIEM vendors.
Lastly, the security landscape is gradually shifting toward managed detection & response (MDR). Managed security service providers (MSSPs) offer conventional monitoring and management of firewalls, gateways, and perimeter intrusion detection, but this is insufficient. They lack both the skills and the technology.
Ziften's system has been evaluated, integrated, approved, and deployed by a number of the emerging MDR providers, demonstrating the capability and flexibility of the Ziften platform to play a key role in remediation and incident response.