Disabling NTLM in your Windows environment

NTLM (NT LAN Manager) has been around for a long time and is a source of problems for network defenders, as there are a number of issues with this form of authentication. NTLM credentials are usually stored in memory, where an attacker can easily extract them with a tool like Mimikatz, and the credentials can also be used in pass-the-hash attacks. Tools like Responder can harvest NTLM credentials over the network simply by impersonating the network share a user tried to access. So, getting rid of NTLM should be a priority for many, but where do you start?

Even though Kerberos is now the default authentication protocol, most companies and organisations cannot simply turn off NTLM support. A lot of applications and systems rely on NTLM authentication to function properly.

So, what can you do as a security administrator to move away from NTLM? The first step is to get the facts on who and what is actually using NTLM for authentication. Active Directory has a specific GPO setting that audits all the NTLM requests that would be blocked if NTLM were not allowed. Those events are logged and can be viewed in the Event Viewer on your domain controller or member server. By enabling this audit trail, you can start to see what is actually using NTLM for authentication. If you have a large environment with legacy applications, my guess is that there will be a lot of entries in that particular log. This GPO, as well as the other two GPOs mentioned later in this post, is located at:

Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options
Network Security: Restrict NTLM: Audit NTLM authentication in this domain

You can set the value to audit only domain accounts or all accounts. If you use local accounts, make sure to set the value to all accounts for a complete log of NTLM use in your environment.

Once the GPO is active, NTLM authentication requests are logged to the operational log located at Applications and Services Logs\Microsoft\Windows\NTLM on every server where the GPO is set.
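Once events start appearing, you can summarise them with PowerShell instead of clicking through Event Viewer. A minimal sketch; event ID 8004 used here is the domain audit event as logged on a domain controller, so adjust the filter to whichever IDs you actually see in your log:

```powershell
# Pull recent entries from the NTLM operational log and show the raw
# messages, which name the account and workstation involved.
# Event ID 8004 (domain audit) is an assumption; member servers log
# other IDs in the 800x range.
Get-WinEvent -LogName 'Microsoft-Windows-NTLM/Operational' -MaxEvents 1000 |
    Where-Object Id -eq 8004 |
    Select-Object TimeCreated, Message |
    Format-List
```

Run this on each server where the audit GPO applies to build a picture of NTLM use over time.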

Second, once you know who and what uses NTLM in your environment, see if you can migrate them to Kerberos instead, perhaps by registering Service Principal Names (SPNs). From personal experience, even really big commercial products sometimes have yet to migrate from NTLM to Kerberos; some products require an upgrade or configuration changes. It all comes down to knowing which applications rely on NTLM authentication, which you now have a way of logging and finding out.
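As an illustration, an in-house web application running under a service account can often be moved to Kerberos by registering an SPN for it. The account and host names below are made up; replace them with your own:

```powershell
# Register an SPN for the service account so clients can request a
# Kerberos ticket for the service (-S checks for duplicates first).
# CONTOSO\svc-webapp and app01.contoso.com are placeholder names.
setspn -S HTTP/app01.contoso.com CONTOSO\svc-webapp

# List the SPNs now registered on the account to verify.
setspn -L CONTOSO\svc-webapp
```

The client application must of course also be configured (or capable) to negotiate Kerberos rather than falling back to NTLM.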

Some applications will probably not be possible to migrate, and if you have to keep them running, you can make an exception for just those applications; that is the third step. With an exception in place, a particular server can still use NTLM for authentication even when NTLM has been disabled in your domain. This is very useful for keeping your NTLM authentication needs to a minimum. Look at the GPO listed below.

Network security: Restrict NTLM: Add server exceptions in this domain

Now that you know what uses NTLM, and have either migrated those systems or made exceptions for them, you can finally disable NTLM altogether by setting this GPO.

Network security: Restrict NTLM: NTLM authentication in this domain

This is the final step required to disable NTLM in your domain, apart from the exceptions you were forced to make for legacy applications. Hopefully you do not have to make any exceptions at all.

Be aware, though, that if you have missed something, there is a good chance that “something” will break. Also, this is not something you configure on a Friday morning and expect to be done by Friday afternoon; it takes time. However, once complete, you have made life harder for attackers, as NTLM is no longer available as an attack vector, or is at least a severely reduced one. Kerberos is not without issues either, but that will be the subject of another blog post.

WMI filters in Group Policy gives me errors

When I started working with WMI filters for use in Group Policy, I was struck by the error “Either the namespace entered is not a valid namespace on the local computer or you do not have access to this namespace on this computer”, the namespace being root\CIMv2.

After researching this issue on the Internet, I realised that I was not alone by any means; a great many people seemed to have the same problem. They felt their WMI queries were correct, but Windows told them differently every time they saved the query. So, what to do about this?

First of all, make sure that you can list the root\CIMv2 namespace. You can do this in PowerShell with the following command:

PS> gwmi -Namespace "root" -Class "__Namespace" | Select Name

You should see CIMv2 listed; otherwise you have bigger problems, and you have to head into troubleshooting WMI and perhaps even repairing your WMI repository. You can read more about this at lansweeper.com. Microsoft also has a guide for WMI troubleshooting, as well as a specific tool called WMI Diagnosis Utility.

If you do see it, chances are that the error produced in GPMC (Group Policy Management Console) is just an irritating bug. I found two tools that can help you build your queries: the PowerShell “gwmi” command and WMI Code Creator from Microsoft. The latter can produce code for querying WMI, but the reason I employ it is that it easily lets me find all classes and parameters, as well as query the properties on the machine I run it on. It can also query a remote machine. This allows me to check that my WMI filters target the right type of computers, or whatever the case may be.

The “gwmi” command is quite useful, as it has a -Query parameter which can be used like this:

PS> gwmi -Query 'Select * from Win32_OperatingSystem where Version like "6.1%"'

This allows you to run your WMI query and check the output. The example above should produce output if you are running Windows 7. A very neat resource of WMI queries for different operating systems can be found on nogeekleftbehind.com.

WMI filters can be very powerful when employed in Group Policy. Instead of having to target every organisational unit that contains workstations running Windows 7, one WMI filter targeting all Windows 7 machines can be used. However, a word of caution: it is easy to make a mistake in your WMI filter and end up targeting other machines than the ones intended. This can produce some strange problems, so do test your WMI queries once or twice before deploying them.
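The Windows 7 example above is itself a good illustration of how easy it is to over-match: version "6.1%" covers both Windows 7 and Windows Server 2008 R2. Adding ProductType to the query restricts it to workstations:

```sql
-- WMI filter for Windows 7 workstations only.
-- ProductType: 1 = workstation, 2 = domain controller, 3 = server.
SELECT * FROM Win32_OperatingSystem
WHERE Version LIKE "6.1%" AND ProductType = "1"
```

You can test the query with gwmi on a few representative machines before attaching the filter to a GPO.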

I still get the same error I opened this post with when I try to save a WMI filter in GPMC, but it works nevertheless. The domain controller is fully patched, but that has not resolved the issue. For the time being, it seems I have to live with this, but as long as it still works, I can deal with it.

EC-Council certification – Never again

A lot of websites and blogs out there use WordPress as their platform, including mine. It has tons of features, is quite easy to use, and its security is not bad either, as long as it is properly maintained. As for EC-Council, they must have missed something when it comes to securing their CMS, which also runs WordPress.

As I have written before, anyone can get hacked; usually it is just a matter of time. It actually occurred to me today that I should tighten my own security measures a bit. The reason is that I get regular reports of script kiddies trying to brute force the username and password for this blog. Nothing new, it probably happens to most of us. My password is not found in any dictionary out there as far as I know, but given enough time, even a script kiddie can get lucky. So, by adding multi-factor authentication as a requirement for logging in to my blog, I have made it a lot harder to gain unauthorized access. It was simple to do, took less than 10 minutes, and there was no cost except for the 10 minutes spent setting it up. Also, thanks to my hosting provider, I enabled SSL for this site, which means my username and password are no longer submitted in clear text. And yes, of course I changed the password for this blog over SSL, to make sure I am the only one who knows it.

Being hacked is one thing; being hacked and having your website infect visitors with one of the worst pieces of malware out there is even more troublesome. What makes it beyond bad is the fact that EC-Council certifies people, myself included, for having skills in IT security. Teaching people and not living as you teach and preach is perhaps the best way of losing everyone's respect. In an industry that needs more skilled professionals, the actions taken by EC-Council are not what we want to see. Several people have argued that once EC-Council knew about the exploit kit, their site should have been removed from the network and reinstated once it was cleaned. Instead it remained on the network for several days, and by doing so, they may have helped spread the exploit kit to unsuspecting visitors. To me, this behaviour is not OK, it simply is not. I can appreciate the shame and guilt that come with being hacked, but acting responsibly could at least have restored some credibility on their behalf; instead they did the opposite.

Honestly, the CEH certification that I have is not worth much, as it does not really prove much. However, the training I took was great. The material was OK and the teacher was excellent, an older British gentleman with a background at GCHQ. He knew his stuff very well, so I learned a lot that week, no doubt about that.

EC-Council cleaned their website as of March 26 according to their announcement, but I find it quite interesting that they made the announcement on Facebook and Twitter. Should they not at least acknowledge the incident on their own website and, more importantly, inform visitors of what could have infected them and give them guidelines on how to check whether they became victims of this attack? Instead, they go with the silent treatment in the hope that this incident will pass and fade over time.

I just wonder: does EC-Council really have a future after this and their DNS compromise in 2014? I do not know, but I will not keep my CEH certification once it expires. It did not stand for much before all this, and now I feel it does not stand for anything good at all. So long, EC-Council.

The blog post about EC-Council serving an exploit kit can be found at http://blog.fox-it.com/2016/03/24/website-of-security-certification-provider-spreading-ransomware/

Security policy dos and don'ts

When it comes to end user security policies, there are several paths to take. One that I do not particularly like is the kind of policy that simply states that the user has to know every policy and how to act in every situation, and that mistakes will be punished. To me, that is not a well-written policy; it is a document published to make sure senior management can say they are not responsible if anything goes wrong. This is wrong from two perspectives: it does not do the user any good, and if something bad does happen that damages the company, senior management is still responsible.

As an example, a policy that clearly states that you as a user are not allowed to click on a malicious link defies every bit of logic. How is the user supposed to know beforehand that a particular link is malicious? After the fact they may realise it, but beforehand? Not a chance. First, what is a malicious link? The link itself could point anywhere, even to a legitimate website that was compromised earlier. The URL string or name is not a good hint for spotting a malicious link. So, how is the user supposed to follow the policy? They cannot, which effectively means that the policy statement is pretty much useless.

Every user can be tricked into clicking a malicious link. Most of us receive a lot of emails that include links to material online. Do you really check every link you click to read that PDF report or whatever it may be? Probably not; you are putting your trust in your company's spam filter and anti-virus software, which are supposed to keep those malicious emails away from you. You probably trust those technologies a lot more than you realise. Ever since we got short links in our emails, such as bit.ly/abcde, it became even harder to manually inspect links in emails and other forms of content. URL rewrites and redirects are common, so it is virtually impossible to predict where you will end up once you click on a link in an email, unless it is an internal email. But even internal resources could be compromised, so just because a URL points to an internal resource, it is not an automatic all clear.

To me, a good end user security policy reminds users of certain rules that need to be adhered to and how they can act to remain as safe as possible. There are always going to be rules an employee needs to follow, but it must be possible, and also simple, to do so; otherwise the policy will not be effective.

As an example, sharing internal documents with people outside of your company or organisation is usually not permitted. But in order for a user to follow that rule, a number of things need to be in place. First and foremost, a document must be clearly labeled in a way that the user understands. Second, there must be a simple process, with a corresponding IT support system, that allows users to tag or label documents. Also, there must be a very clear and simple statement on how to tag or label a document with a particular security level. If any of these are missing or poorly defined, the policy will again fail. If users cannot follow the steps without too much effort, they will not; it is human nature, we are lazy creatures. It is one of the most common mistakes I have personally witnessed throughout my career: a policy that is virtually impossible to adhere to because the supporting processes and tools are not available.

One of the most important things about any security policy is that it must contain contact information in case users need clarification of its content. Interpretation of a policy can result in very different outcomes depending on the user's perspective, so keep the language as simple as possible. Avoid too many technical terms, as they may confuse users into doing things the wrong way. Also, policy violations can of course not be tolerated, but the threat of punishment is not a good way to get users compliant. Rewarding feedback on your security policy allows you as a security officer to enhance it, and helps ensure that end users accept the policy not because they must, but because it aligns with their sense of what security measures are adequate. Trying to force cooperation almost never ends well, and in the end, you want users to adhere to the policy guidelines, not to circumvent them because they feel the policy is getting in the way of how they want to work or accomplish things in their day-to-day activities.

In order to achieve that goal, you have to have a dialogue with your users and understand the business model of your company or organisation. If your policy goes against the business model or business needs, it will not be accepted, and then it will not benefit anyone. This is perhaps one of the toughest challenges for many security officers: aligning security requirements with the company's business model and needs. That is why it is very important to have senior management on board when it comes to security strategies, so that they align with the business model. Being a CSO without direct access to senior management can be quite a pain when trying to gain acceptance for security policies.

As for advising your users on good security practices, again, it must be easy for them to follow. Good security should be the obvious way of behaving. Security practices that are widely accepted by users tend to be transparent: the security is there, but users do not really see it as a security measure. Again, the software used by your company can make or break you as a security officer when it comes to acceptance. Most companies do not condone the use of Dropbox and similar services and often inform users that it is not allowed. What most companies tend to miss is the obvious question of why users are turning to Dropbox instead of the internal document management process and IT system. User friendliness is a key component in gaining acceptance from users, and in order to get good security you have to get acceptance from your users. Do not punish them for turning to a user-friendly alternative if the internal tool is difficult or cumbersome to use. Rather, try to influence IT in the right direction so your job as a security officer becomes easier. If you have to constantly remind users that they are violating the security policy, there is obviously something wrong with it.

In my opinion, most users want to do the right thing and stay secure; you as the security officer just have to make sure that they can do so in a way that is acceptable to them. So, good luck in writing your security policy. Your users will, if you let them, let you know whether they feel you succeeded.

Keeping your personal data secure

Everyone has the need to write things down. It can be notes, passwords, card information, or basically anything. When you do write it down, there are two things you would probably like to have: security and synchronisation, and, if you can get it, no or low cost.

There are a lot of password managers out there where you can store passwords and other data and keep it synced between devices. So which one to use?

Synchronisation is a nice feature to have which allows you to keep your data with you at all times. Having the same data on your laptop and phone means easy access to your data.

Security is another matter. Basically all password managers are secured by a master password, so your security really depends on that single password. However, most password managers do allow you to restore your password if you forget it, which can be a good thing. On the other hand, this also means that the provider of the password manager has your password in some form. Most of them will store it in an encrypted format, but nonetheless, they are able to read your data, as they are in possession of the master key used to encrypt your password. This also means that if asked by certain authorities, they can unlock the data you would probably like to keep private.

Another benefit of using a password manager is that it allows you to have separate passwords for different websites instead of using the same password everywhere. In our digital world, where we frequently visit a lot of different websites, it is quite hard for most people to remember a large number of passwords. A password manager lets you keep track of all these passwords by remembering one strong password.

Having the same password on multiple websites is a bad idea. If one of those websites is breached, your password may be exposed. Once that occurs, the password, combined with other data about you such as your username and email address, can be used to try to log in to other websites as well. This has been the downfall of many users online.

Many password managers can automatically fill in your details, as they are part of your web browser. Convenient, but as far as I know, all of those have the drawback that the provider knows your password and stores it, which I am not a fan of. So, personally I use a password manager where the provider has no knowledge of my chosen password, not even my username. If I forget either my username or my password, I will be unable to recover my data. This is a risk I am willing to take.
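The idea behind such zero-knowledge designs is that the encryption key is derived from the master password on your own machine, so only encrypted data ever reaches the provider. A rough sketch of that kind of derivation in PowerShell; the salt and iteration count here are purely illustrative, real products use carefully vetted parameters:

```powershell
# Derive a 256-bit key from a master password with PBKDF2, client side.
# The salt would normally be random and stored next to the encrypted vault.
$password   = 'correct horse battery staple'   # placeholder master password
$salt       = [byte[]](1..16)                  # illustrative, not random
$iterations = 100000
$kdf = New-Object System.Security.Cryptography.Rfc2898DeriveBytes($password, $salt, $iterations)
$key = $kdf.GetBytes(32)   # the provider never sees $password or $key
```

Because the key only ever exists on your device, forgetting the master password really does mean the data is gone, which is exactly the trade-off described above.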

The password manager I use is free, available on multiple platforms, keeps my data synced, and is as secure as it can get; my password is not available to anyone but me. You can look it up for yourself:

SpiderOak Encryptr

Whether this is the right one for you, that is for you to decide.