Multiple Paths to Compromise an Environment
Every attack starts with reconnaissance: from an external attacker's perspective, profiling a company's external footprint; from an insider threat's, identifying which systems to go after.
A wealth of information can be gathered during the reconnaissance phase of an assessment. I've written about the OSINT techniques that can be used on LinkedIn and GitHub to identify information about companies. To read more about this, please see my other post here.
Access to internal networks is often a key objective in an engagement or operation; the means of gaining access vary, but the critical failings within environments, whether cloud-connected or on-premises, broadly fall into the following categories:
● Weak Passwords/Policies
● Insecure Handling of Sensitive Information/Data Governance
● Insufficient Network Segregation
These are the top three we see; the fourth is misconfigurations and vulnerabilities: if access is not gained through the three above, it is typically achieved through some form of remote code execution. While vulnerabilities can be patched and managed as risks, the other three categories rarely have a simple patch or fix; still, several of their root causes can be remedied with the right policy and technical controls.
Weak Passwords
This age-old issue affects everything from Windows domains to cloud-connected applications and everything in between. The most common finding, whether it is credentials compromised through external password spraying or hashes cracked after a DCSync attack, is that weak passwords are still, in 2022, at the forefront of issues affecting an organisation's assets.
A related issue is password reuse, whereby compromising one set of credentials opens many doors. Users in non-technical roles often do not appreciate why a strong password is essential, or why good password and data management matters. Educating your workforce, peers, friends and family on what an attacker can do with their password is critical to hammering home the importance of good practice.
Attacking an organisation usually starts with the recon phase: identifying passwords already exposed in database dumps and previous breaches. Attackers use this data to profile an organisation, often gaining initial access with previously compromised passwords and working out users' password policies and tendencies. Once credentials are obtained, the process moves to credential stuffing: hunting out portals or services used by the organisation and trying combinations of credentials until access is gained. Candidate passwords might be companyname123 or dictionary words followed by numbers; this varies with each company's password policy but is a good baseline to follow, as sketched below.
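To illustrate how predictable these patterns are, here is a minimal Python sketch that builds a candidate list in the style described above; the seed words and suffixes are hypothetical examples, not a definitive spraying methodology.

```python
# Hypothetical sketch: build candidate passwords from predictable
# patterns (company name, seasons) plus common suffixes, in the style
# attackers use for spraying. Seeds and suffixes are illustrative only.
seeds = ["companyname", "Companyname", "Summer", "Winter"]
suffixes = ["123", "2022", "2022!", "!", "1"]

candidates = [seed + suffix for seed in seeds for suffix in suffixes]

for candidate in candidates:
    print(candidate)
```

A handful of seeds and suffixes like these already covers a surprising proportion of real-world passwords in environments with weak policies.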
Combatting weak passwords is easier said than done, as users tend to select words or phrases they can easily remember, which are often weak and easy to guess. At a high level, implementing password blocklists and auditing your environment regularly will help target poor practices. Following this up with education rather than threats and punishment fosters positivity and encourages users to value the security of their workplace and your organisation. Technical controls can also be implemented, and each provider has different integrations; Azure's password blocklisting is a good example of a positive step towards reducing common password usage.
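As a rough sketch of what a blocklist check involves (the wordlist here is illustrative, and real products such as Azure's password protection apply far more sophisticated matching), normalising a candidate password before comparison stops trivial substitutions from bypassing the list:

```python
# Minimal sketch of a password blocklist check. Normalises common
# character substitutions (leetspeak) and strips digits/symbols so that
# "P@ssw0rd123!" still matches the banned base word "password".
BLOCKLIST = {"password", "companyname", "summer", "winter"}  # illustrative

LEET = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s", "5": "s"})

def is_banned(candidate: str) -> bool:
    normalised = candidate.lower().translate(LEET)
    stripped = "".join(ch for ch in normalised if ch.isalpha())
    return any(word in stripped for word in BLOCKLIST)

print(is_banned("P@ssw0rd123!"))                 # True
print(is_banned("correct-horse-battery-staple")) # False
```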
Data Governance
Data is gold dust to an attacker. Along with weak passwords, one of the most significant areas where we, as attackers, see success is poor data governance within an organisation. As referenced in the recon section earlier, I've written about how data can be obtained from public resources such as LinkedIn and GitHub when employees overshare information about an organisation's goings-on and technology stack. Good data governance is therefore incredibly important both outside and inside the network, covering everything from LinkedIn profiles to internal file shares and everywhere in between.
These faults tend to be remediated with policy, but it is essential to back up policy with technical controls too. They usually stem from harmful habits: users learn from administrators who litter data around an environment and are encouraged to put specific data in certain locations. The result is a lot of shared information, often protected with weak permissions or accessible to all users, which from an attacker's perspective is perfect: it provides many windows for attack and for elevating privilege inside an environment.
SharePoint
SharePoint offers many pathways to credentials strewn across files in an environment, and these are often the missing piece of the puzzle once an attacker gains initial access, usually via weak passwords, a lack of multi-factor authentication, phishing, or a combination of the three. From an administrative perspective, think about how your environment is set up, what files you share on SharePoint, and how good your organisation is at protecting data. Simply searching for phrases such as password, passwd, pwd and credential, plus variations on those, will return plenty of helpful documentation and often hard-coded credentials. SharePoint is frequently a digital gold mine of information.
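As a minimal sketch of this kind of hunting, the classic SharePoint search REST endpoint can be queried for each keyword. The tenant URL, token, and the exact response structure shown are assumptions that will vary with configuration and authentication setup; treat this as an outline rather than a working client.

```python
# Hypothetical sketch: query the SharePoint search REST API for common
# credential-related keywords. Tenant URL and token are placeholders;
# the verbose-OData response path shown may differ by configuration.
import requests

TENANT = "https://yourtenant.sharepoint.com"  # placeholder
TOKEN = "eyJ..."                              # placeholder bearer token
KEYWORDS = ["password", "passwd", "pwd", "credential"]

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json;odata=verbose",
}

for keyword in KEYWORDS:
    resp = requests.get(
        f"{TENANT}/_api/search/query",
        params={"querytext": f"'{keyword}'"},
        headers=headers,
    )
    resp.raise_for_status()
    rows = (resp.json()["d"]["query"]["PrimaryQueryResult"]
            ["RelevantResults"]["Table"]["Rows"]["results"])
    print(f"{keyword}: {len(rows)} results")
```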
Also, with an increasing number of environments operating in a hybrid cloud setup, even more documents, credentials, and other sensitive information are shared on SharePoint and similar platforms, providing an easy attack path to elevated permissions.
File Shares
File shares within a Windows estate tend to hold a wealth of information. The primary risk is insufficient access controls on shares within a network: once an attacker gains access to an internal network, one of the first enumeration steps is to hunt for critical data and information. Searching file shares can be achieved with a variety of tools; the one I find the most success with is:
Snaffler (https://github.com/SnaffCon/Snaffler) works by enumerating computers from Active Directory and the shares on those systems; it then indexes the files and runs a series of regular expressions to pick out juicy-looking data.
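Snaffler itself is a C# tool; purely to illustrate the approach, a minimal Python analogue might walk a mounted share and score files against a handful of regular expressions. The share path and patterns below are hypothetical.

```python
# Minimal, hypothetical analogue of Snaffler's approach: walk a share,
# then run regexes over file contents to flag "juicy" data. Real tools
# enumerate shares from Active Directory; here the path is hard-coded.
import re
from pathlib import Path

SHARE = Path(r"\\fileserver\public")  # placeholder share path
INTERESTING = {".txt", ".ps1", ".xml", ".config", ".ini", ".bat"}
PATTERNS = {
    "password assignment": re.compile(r"(?i)pass(word|wd)?\s*[=:]\s*\S+"),
    "connection string":   re.compile(r"(?i)Data Source=.+;.*Password=.+"),
    "private key":         re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
}

for path in SHARE.rglob("*"):
    if not path.is_file() or path.suffix.lower() not in INTERESTING:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue  # locked or unreadable file; skip
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            print(f"[{label}] {path}")
```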
Like SharePoint, file shares are often littered with data, and depending on retention policies and controls, files are usually several years old yet frequently still contain valid credentials. Typically, scripts and Excel spreadsheets yield the highest success rate for credentials: users tend to use spreadsheets as makeshift password managers, and their file structure makes them easy to search, which makes them an excellent resource for an adversary.
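To show just how searchable spreadsheets are, a few lines of Python with openpyxl (one assumed choice of library; the share path is a placeholder) can sweep every cell of every workbook on a share for credential-like keywords:

```python
# Hypothetical sketch: scan .xlsx workbooks on a share for cells that
# look like credential material. Requires: pip install openpyxl
from pathlib import Path
from openpyxl import load_workbook

KEYWORDS = ("password", "passwd", "pwd", "credential")

for xlsx in Path(r"\\fileserver\public").rglob("*.xlsx"):  # placeholder
    try:
        wb = load_workbook(xlsx, read_only=True, data_only=True)
    except Exception:
        continue  # corrupt or locked workbook; skip
    for ws in wb.worksheets:
        for row in ws.iter_rows(values_only=True):
            for cell in row:
                if isinstance(cell, str) and any(k in cell.lower() for k in KEYWORDS):
                    print(f"{xlsx} [{ws.title}]: {cell!r}")
    wb.close()
```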
How To Adopt a Better Approach
Where possible, establish an organisational policy that prohibits password storage in files. Alternatively, restrict file shares to specific directories accessible only to the users who need them, using technical controls to protect users from themselves and others. Shares with high-value content, or those used by administrators, should be monitored for anomalous activity and for access attempts by standard users; alerts on such activity can surface potential malice in the environment. Equally, share permissions and user access should be regularly audited to ensure employees who have left no longer have access. For example, if an IT admin left the department two years ago, why do they still have access to the IT gold build shares?
Network file shares should be audited thoroughly to ensure that they are accessible to allowed users only. Some open-source tools designed for penetration testing can audit file shares and identify poor permissions and sensitive, exposed files.
Scripts used for maintenance and automation tasks should be written with security practices in mind to avoid leaking sensitive information and passwords for privileged accounts. Passwords should not be left hardcoded in scripts. Microsoft provides APIs that enable secure password storage and encryption; however, these need to be used correctly to provide any security benefit.
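As one illustration of keeping secrets out of script bodies, a maintenance script can fetch its password from the operating system's credential store at runtime. The sketch below assumes the Python keyring library, which on Windows backs onto the Credential Manager; the service and account names are placeholders.

```python
# Sketch: fetch a service account password from the OS credential store
# at runtime rather than hardcoding it. Requires: pip install keyring
import keyring

SERVICE = "backup-task"  # placeholder credential entry name
ACCOUNT = "svc_backup"   # placeholder service account

password = keyring.get_password(SERVICE, ACCOUNT)
if password is None:
    raise SystemExit(f"No stored credential for {ACCOUNT}; "
                     f"store one with keyring.set_password() first.")

# ... use `password` to authenticate, never writing it to disk or logs
```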
It may not always be possible to avoid recording domain account passwords to enable the functionality of a given service. Such files must be protected as much as possible: accounts should be unique to the service they support, and passwords should be strong and not reused from another service.
Wherever software support allows, connection strings should be encrypted, with the keys stored off the server or protected by passwords that are not hardcoded in a file.
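A minimal sketch of that idea, assuming Python's cryptography library as a stand-in for whatever your platform supports: the connection string lives on disk only in encrypted form, and the key arrives from outside the host, for example via an environment variable injected at deploy time. File and variable names are hypothetical.

```python
# Sketch: decrypt a connection string at runtime with a key supplied
# from outside the file system. Requires: pip install cryptography
import os
from cryptography.fernet import Fernet

key = os.environ["CONNSTR_KEY"]  # key injected at deploy time, not on disk

# connstr.enc was created once with:
#   Fernet(key).encrypt(b"Server=db01;User=app;Password=...;")
with open("connstr.enc", "rb") as f:
    encrypted = f.read()

conn_str = Fernet(key).decrypt(encrypted).decode()
# ... pass conn_str to the database driver; never log or persist it
```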
Mitigating credentials in file shares is a matter of hygiene. It is vital to routinely remind employees to avoid storing credentials in files on shares wherever humanly possible, although some applications require credentials to be hardcoded into configuration files and scripts despite the risk.

Where credential material for highly privileged accounts must be stored in shares, consider using System Access Control List (SACL) auditing to create event log entries for any attempted access to those locations. A "Detailed File Share" event (Windows Event ID 5145) is generated whenever a process attempts to access a file or folder inside a network share. This event is not enabled by default but can be enabled using the Windows Advanced Audit Policy in a Group Policy Object (GPO).

An effective approach to detecting mass share crawling is to use this event to check whether the number of unique files accessed by a given user within a certain period is significantly greater than that user's norm. For example, if a user typically accesses ten individual files over the network on an average day and has done so for the last six months, but suddenly accesses over 10,000 separate files in one day, this may indicate malicious crawling activity. A simple baseline comparison along these lines is sketched below.
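The following sketch assumes the 5145 events have already been exported from your SIEM to CSV; the column names and threshold are hypothetical and would need tuning to your environment.

```python
# Sketch: flag users whose daily count of unique files accessed over
# network shares (Event ID 5145) far exceeds their historical baseline.
# Assumes a CSV export with columns "user", "date",
# "relative_target_name" (field names are hypothetical).
import csv
from collections import defaultdict
from statistics import mean

THRESHOLD_MULTIPLIER = 50  # illustrative; tune to your environment

daily_files = defaultdict(set)  # (user, date) -> unique file paths
with open("event_5145_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        daily_files[(row["user"], row["date"])].add(row["relative_target_name"])

per_user = defaultdict(list)  # user -> list of (date, unique-file count)
for (user, date), files in daily_files.items():
    per_user[user].append((date, len(files)))

for user, days in per_user.items():
    baseline = mean(count for _, count in days)  # crude: includes all days
    for date, count in days:
        if count > baseline * THRESHOLD_MULTIPLIER:
            print(f"ALERT: {user} accessed {count} unique files on {date} "
                  f"(baseline ~{baseline:.0f}/day)")
```

A production detection would exclude the day under test from its own baseline and account for weekends and role changes, but the principle is the same.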
While adopting good practices around handling and storing information can be a manual task, attackers have many automated tools at their disposal. As a defender, it is equally important to think like an attacker and leverage the same tools to close gaps in detection and remediation. Automated crawlers such as Snaffler can be tuned for specific keywords or file extensions, and because corporate file systems change daily, they require constant monitoring to ensure no oversights or misconfigurations creep in.
Lack of Network Segregation
Insufficient network segregation is often identified in less mature environments; it significantly increases the attack surface and enables easier lateral movement across subnets. In a correctly segregated network, exposed services are reduced to a minimum and split into segments according to their purpose. Ultimately, a service that cannot be reached cannot be exploited directly.
A network environment that is not adequately segregated gives an attacker the ability to perform ARP poisoning attacks, allowing for the following vectors (a simple detection sketch follows the list):
● Intercept all network services (SQL, HTTP, Telnet, RDP, VoIP, SMB, etc.);
● Intercept traffic to and from external locations (by poisoning the local gateway or router);
● Intercept file transfers, backups, management protocols, and more;
● Steal credentials and downgrade authentication protocols to crack weakly encrypted protocols (Remote Desktop, SSL connections, SMB, etc.).
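As a small defensive illustration, a monitor can watch ARP replies and alert when an IP address suddenly maps to a new MAC. The sketch below assumes the scapy library and a vantage point on the relevant segment; it is a starting point, not a complete detection.

```python
# Sketch: detect possible ARP poisoning by alerting when an IP-to-MAC
# mapping changes mid-capture. Requires: pip install scapy (and admin/
# root privileges to sniff).
from scapy.all import ARP, sniff

seen = {}  # ip -> last observed mac

def check(pkt):
    if ARP in pkt and pkt[ARP].op == 2:  # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print(f"ALERT: {ip} changed from {seen[ip]} to {mac} "
                  f"- possible ARP spoofing")
        seen[ip] = mac

sniff(filter="arp", prn=check, store=False)
```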
Apart from potentially breaking into other sensitive servers, a mass client-side attack may also be launched against systems; wormable malware such as WannaCry is a past example. Should one host be breached from an arbitrary location (internal or external, directly or not), attackers gain direct access to the rest of the servers and workstations on the network. Containing the breach becomes difficult in non-segregated networks, and a skilled attacker can make their presence very persistent by establishing multiple access routes into various systems.
Endpoint Detection and Response (EDR) products play an essential role but should not be trusted beyond what they are capable of. Without segregation in place, sensitive systems on the network can become exposed to an attacker via segregation gaps, firewall misconfigurations, or newly discovered vulnerabilities.
Combining the Weaknesses for a Successful Attack
Attacking networks where users have weak passwords, systems are outdated, files are littered with sensitive information, and defenders lack visibility is equivalent to bringing a fire hose to a water fight. Using credentials to elevate privileges and traverse a poorly segregated network often means access over protocols like SMB and RDP is easy, and it is even worse when these are exposed to the internet! Life for an attacker can be straightforward, which is one of the many reasons ransomware attacks are so prevalent in immature environments.
Azure Environments
Azure environments, like on-premises ones, have a wealth of weaknesses too; however, due to the nature of their configuration, the amount of exposed information is reduced thanks to enforced best practices. I have written about Azure attack paths in a previous post, and I have published my attack kit on GitHub, which you can check out.
A bottom-up review of your organisation's security should be performed to ensure that its assets are appropriately protected; this should encompass:
● Staff security awareness training focused on user password management and security;
● A review of network segregation and the steps that can be taken to architect it properly;
● A review of your Windows estate, identifying weak data governance and applying policies in practice with technical controls.
Finally, while the three root causes discussed in this post focus on attack paths, prevention applied across the network and environment will always outplay detection; where prevention and hardening are not possible, apply ample detections and tune your response to protect your environment.