The past two years have shown that remote work is a perfectly feasible form of office work that does not hinder productivity and contributes to employee well-being. However, the distributed character of telework poses security challenges that come with an elevated risk of compromise and data exposure. This blog post lists eight best practices to make your remote work strategy more secure in 2022.
Weak password management, major leaks of user credentials, and the evolving threat landscape have rendered password-only authentication obsolete. Credential stuffing attacks are among the most common and most dangerous, since a single leaked password can grant access to an entire interconnected IT ecosystem.
If your users access important IT systems remotely, do not rely on passwords alone; implement strong prevention of unauthorized access. Insist on two-factor authentication (2FA) via a one-time code sent by text message or email, or on multi-factor authentication (MFA), which combines several methods of authentication based on something the users know (a password), something they have (a token or device), and something they are (biometrics).
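To illustrate what a second factor looks like under the hood, here is a minimal sketch of time-based one-time password (TOTP) verification, the scheme behind most authenticator apps (RFC 6238). The function and variable names are illustrative; a production deployment would use a vetted library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32, submitted, at=None):
    """Check a user-submitted code in constant time to avoid timing leaks."""
    return hmac.compare_digest(totp(secret_b32, at=at), submitted)
```

Because both server and device derive the code from the same secret and the current time, an attacker holding only the stolen password still cannot log in.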
Company infrastructures, even those of many small and medium businesses, now span business premises, multiple on-site branches, private and public clouds, employee homes, and the public Internet. The attack surface is thus enormous and provides many tempting opportunities for threat actors to obtain access credentials or use other techniques to infiltrate critical systems.
To reduce the attack surface, it is essential not to open business-critical systems to everyone. IP whitelisting is a simple yet very effective tool for this purpose: it prevents unauthorized access by allowing only trusted IP addresses to connect to a system (a LAN, business application, database, etc.).
A prerequisite of IP whitelisting is a fixed (static) IP address that does not change over time and is owned exclusively by the target device or devices. You can lease one from your ISP or, better, use a cloud VPN that delivers a dedicated static IP.
As a result, by using IP whitelisting on the server, you can easily hide online systems from public view. Such systems are available only to users connecting from the organization's IP address, whether from the private corporate network or through a VPN gateway, and are by default shielded from common network-based attacks.
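The check itself is straightforward. Below is a minimal sketch using Python's standard `ipaddress` module; the whitelist entries are hypothetical documentation-range addresses standing in for an office's static IP and a cloud VPN gateway's subnet.

```python
import ipaddress

# Hypothetical whitelist (RFC 5737 documentation ranges used as placeholders).
WHITELIST = [
    ipaddress.ip_network("203.0.113.25/32"),   # office static IP
    ipaddress.ip_network("198.51.100.0/24"),   # cloud VPN gateway subnet
]

def is_allowed(client_ip):
    """Return True only if the client address falls inside a whitelisted network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in WHITELIST)
```

In practice the same rule would typically live in a firewall or load-balancer configuration rather than application code, but the logic is identical: deny by default, allow by explicit exception.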
Anyone curating personal or sensitive data has a responsibility to ensure its privacy, security, and protection against theft or misuse (under the GDPR, this role is defined as the Data Controller). They also have a legal obligation to know and keep a record of who accesses the data, so they can verify it is accessed only for legitimate purposes.
Monitoring and logging user activity across all network communication and systems is a fundamental tool: it gives your company visibility into access history and is an essential component of implementing your compliance policy.
In addition, being able to review access and communication history is invaluable when investigating a breach post-compromise, as it is here that you can trace the adversary’s footsteps and repair the damage they caused.
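A useful property of audit logs is that each record is machine-readable, so access history can be searched during an investigation. Here is a minimal sketch of structured access logging with Python's standard `logging` and `json` modules; the field names are illustrative, not a standard schema.

```python
import json
import logging
import time

logger = logging.getLogger("access-audit")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_access(user, resource, action, allowed):
    """Emit one machine-readable audit record and return it for inspection."""
    record = json.dumps({
        "ts": time.time(),        # when the access happened
        "user": user,             # who accessed
        "resource": resource,     # what was accessed
        "action": action,         # what they did (read, write, delete, ...)
        "allowed": allowed,       # whether the request was permitted
    })
    logger.info(record)
    return record
```

In a real deployment the records would be shipped to centralized, tamper-resistant storage so an intruder cannot erase their own footprints.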
Instead of trusting that user connections are implicitly legitimate, assign access on a strictly need-to-know basis (see zero-trust network access). This means you should allow access only to verified employees, even within your private network, and eliminate unnecessary permissions (the so-called principle of least privilege).
For instance, if you let every company department access the CRM system, you are only increasing the number of potentially exploitable entry points without any practical benefit. Users should access only those systems they have a reason to be in.
By restricting access by team, system, application, and IP address, you achieve a level of segmentation that helps contain developing breaches and prevent the compromise from escalating and spreading.
Remote work has become a standard form of office work, and as such, it requires two things. Firstly, company data has to be protected, and, secondly, employees need a level of ergonomics to work well. To do this properly, there are three key things to keep in mind:
Always have a backup copy of your sensitive data available so you can recover it if your primary data is lost or compromised, and schedule backups regularly to minimize the data loss incurred when recovering from an outdated copy.
It is essential to use a separate repository, which, depending on your needs and available options, may range from a simple external drive to a disk storage system or a cloud container. Keeping your backups in remote locations is a sensible strategy to deal with the most catastrophic of events, such as building fires or natural disasters.
It goes without saying that having a backup copy ready can save a lot of time and money when you actually do need to recover your data. With a good backup, you can restore quickly and easily; without one, you may spend weeks getting your data back and often have to enlist the help of a data recovery professional.
Application-layer encryption is performed by the applications running at the endpoints, which ensure the confidentiality of data in transit. This encryption scheme is used broadly on the internet today, as it is a powerful yet relatively easy-to-implement method of preserving the privacy of transmitted data.
If you are a service provider or own a domain, always make sure your TLS certificate is valid. You can also use open-source certificate authorities like Let's Encrypt to handle certificate issuance and renewal.
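Checking certificate validity is easy to automate. The sketch below uses Python's standard `ssl` module: one function fetches the certificate from a live server, and a small helper computes the remaining lifetime from the certificate's `notAfter` field. The hostname and threshold are up to you; running this daily and alerting below, say, 14 days is a common pattern.

```python
import datetime
import socket
import ssl

def days_left(not_after, now):
    """Days remaining given a certificate's 'notAfter' string (e.g. 'Dec 31 23:59:59 2030 GMT')."""
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry - now).days

def days_until_expiry(hostname, port=443):
    """Connect with full certificate verification and report days until expiry."""
    ctx = ssl.create_default_context()   # verifies chain and hostname by default
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return days_left(cert["notAfter"], datetime.datetime.utcnow())
```

Because `create_default_context()` verifies the chain and hostname, an invalid or mismatched certificate makes the connection itself fail, which is exactly the signal a monitoring job wants.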
However, application-layer encryption only encrypts the payload. To further reduce the attack surface, it is good to combine it with network-layer encryption, which covers the transport headers as well and therefore shields metadata, such as DNS queries and the addresses of the servers you contact, from interception and misuse.
To implement network-layer encryption, you need to deploy a business cloud VPN, which creates a secure tunnel between two points identified by, e.g., IP addresses, and is highly suitable for network security scenarios where high speed and low latency are required.
Combining these two encryption schemes both ensures the privacy of the payload throughout its transit to the recipient and hides the identities of the communicating parties from the public Internet.
You may define a strong password as one that is extremely difficult for a human to guess, is hard to crack by brute force, and does not feature in databases of common or leaked passwords used in dictionary attacks.
This means a password has to be unique, long, and include a varied mix of characters. The easiest way to come up with a password that meets all three criteria is to generate it randomly.
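Generating such a password takes only a few lines with Python's `secrets` module, which draws from a cryptographically secure random source (unlike the general-purpose `random` module). The length and character-class policy below are illustrative defaults, not a standard.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Generate a random password containing lower, upper, digit, and symbol characters.

    Retries until every character class appears, which for length >= 4
    terminates almost immediately.
    """
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Each 20-character password drawn from this 94-symbol alphabet carries roughly 130 bits of entropy, far beyond the reach of brute-force or dictionary attacks.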
Random password generation is a widespread feature among internet browsers today. Many suggest strong passwords in the password fields upon account creation, and even store the passwords for the user. The benefit of this is that you end up with a unique password for every online account you log into, so if one password gets snatched, access to the rest of your accounts is not compromised.
However, an even better approach is to use a dedicated password manager that stores all your passwords, generates new unique ones, and keeps them all safe behind a master password known only to you.
A password manager eliminates the difficulty of remembering passwords that human memory can't retain (see if you can remember this: ;L8FDJFG1\4vn='[), let alone several of them, and thus removes the primary reason people avoid strong passwords in the first place.