Cyber Defense Exercise

Recently I took part in a Cyber Defense Exercise aimed at Critical Infrastructure at the Swedish Defense Research Agency, as a member of one of two designated Blue Teams. Each Blue Team was defending its production environment against a Red Team.

This exercise was carried out over three days, with presentations and evaluations on top of the actual “hands-on” exercise, where we really got beaten down by the Red Team. We had a number of things stacked against us: an unknown network that was not patched and was running outdated software and operating systems, as well as internal staff who had hidden Trojans and other malicious code on our machines. On top of that, our firewalls were not configured very well either. As for our team, we did not know each other, which also added to the internal chaos. Even though we came out as the better Blue Team, there are a number of things that I learned that I thought I would share.

1. Incident Detection

This was one of the things that we were taught to pay attention to. Detection is very hard even if you have an IDS running; just interpreting what you see can be tough, and that requires skill. There are a number of IDS/IPS solutions out there, and I do not care which one you use, you need training to be able to utilize it properly. We had Snorby running, which is a web-based GUI for IDS alerts; a screenshot is shown below. Even with this in place, it was not easy to detect exactly what was going on.

Snorby
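
Under the hood, Snorby only presents the alerts that the IDS engine (Snort or Suricata) generates from its rules. Just to illustrate what such a rule looks like, here is a minimal custom Snort-style rule; the sid and the network variables are examples, not something we actually ran during the exercise:

# Example rule only - alert on inbound connections to RDP (TCP 3389) from outside the home network
alert tcp $EXTERNAL_NET any -> $HOME_NET 3389 (msg:"Inbound RDP connection attempt"; flow:to_server,established; sid:1000001; rev:1;)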

Another thing was visualization. If you visualize the network, it is also a lot easier to actually see what is happening in it. There are different tools out there that can do this, but an open-source one is EtherApe. This tool can present a visual representation of your network if it is able to listen to the network traffic, as seen in the screenshot below.

etherape
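
Getting started with it is simple; assuming an interface named eth0 (just an example), something like this starts EtherApe listening on that interface:

sudo etherape -i eth0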

Last but not least is system monitoring, to actually know what is happening on your clients and servers. Central logging with an analysis engine running is not a bad idea, for example to catch when a user account is added locally on a system. Again, there are a number of tools for central logging and analysis; this is often referred to as SIEM, Security Information and Event Management. I can't argue with the fact that without central management of what is happening on your network and on your systems, it is harder to detect an incident. One thing I really want to mark as extremely important is time synchronization; without it, SIEM is not going to work as expected. Every event captured must share a common understanding of what the time is, otherwise correlating events and logs is impossible.
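
As a minimal sketch of the central logging idea, assuming a hypothetical log collector named siem.example.com and an NTP server named ntp.example.com, forwarding syslog from a Linux host and keeping its clock in sync could look something like this:

# /etc/rsyslog.d/60-forward.conf - send all local syslog events to the central collector over TCP
*.* @@siem.example.com:514

# /etc/ntp.conf (or chrony.conf) - keep the clock in sync so events can be correlated
server ntp.example.com iburst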

2. Incident Response

When you try to resolve an incident, there are a number of things to think about. The point is that when you are stressed and you do not have a plan or an organization to work with, incident response falls apart. Planning and organization are key, and this is something you have to address before an incident occurs and evaluate after each incident. The key here is to assign roles that have duties but also a mandate to act, such as a site manager, an IT security officer, perhaps a network engineer, an IDS engineer and so on … it all depends on your internal structures. Having all these roles designated is vital, as it allows you to assign the correct people with the correct skills to an incident. Once something happens you usually have very little time to invent things like an incident policy or an incident organization.

One key factor is the skill of your team. An incident response team needs training, simple as that. At the very least, try to run drills in house using simulations; these can be quite effective. Spending time on this might prove its worth once something really happens. If you are unprepared when an incident occurs, your incident response will simply not be very efficient. There is a reason firefighters train as much as they do; being unprepared is simply not an option for them, and you should consider having the same mindset in your organization.

3. Be Prepared

This has already been said, but I can't stress it enough. Once the shit hits the fan, so to speak, it is going to hurt a lot more if you have not prepared to deal with it. Put in the real work beforehand and you will manage an incident a lot better. Handling an incident is almost never easy, but it can quickly become overwhelming if you are unprepared.

4. Learn from it

Last but certainly not least, learn from every incident. What did you do well, and perhaps not so well? Did your plan and organization work as expected, or does something perhaps need to change? Be your own worst critic; you are the one who will benefit from it the next time an incident occurs.

SHIPS have set sail, part II

SHIPS, which I wrote about in my last post, is a system for rotating admin passwords developed by TrustedSec. It handles both Linux and Windows systems, which is great. I have read the documentation but have yet to try it; still, I want to point out a few things that I noticed while reading it.

1. The initial installation of SHIPS involves a number of manual steps

This is actually quite necessary, as it also builds understanding of how SHIPS works, which is good. The downside is that it does take some time to get everything up and running. On the other hand, it solves a problem that has bugged sysadmins for years, so I can definitely live with it.

2. Dependencies

The required Ruby packages are easily installed on most Linux distributions, but one thing that is perhaps a bit underemphasized is the need for a working PKI (Public Key Infrastructure). The reason for this is that SHIPS uses SSL when communicating between the client and the server, and the client needs to trust the server's SSL certificate. Your company or organization should have a CA which can sign the SHIPS server's SSL certificate so that it is trusted by your clients. If you don't have this, you will have a harder time setting up the clients. It is possible to work around this by trusting individual self-signed SSL certificates on individual clients, but it is not recommended. There is also an option to run curl (which is used by the Linux clients to communicate with the SHIPS server) in insecure mode, thereby not validating the SHIPS server's SSL cert. Do not use this; instead, if you plan to use SHIPS, make sure you have a PKI to support it so everything works more smoothly.
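
For a Linux client, a minimal sketch of trusting your internal CA on a Debian or Ubuntu style system could look like the following; the certificate file name and the SHIPS server host name here are just examples:

# Add the internal CA certificate to the system trust store (file name is an example)
sudo cp internal-ca.crt /usr/local/share/ca-certificates/internal-ca.crt
sudo update-ca-certificates

# curl should now validate the SHIPS server certificate without any insecure flag
curl https://ships.example.local/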

3. Idents

Idents are used for managing objects within SHIPS, such as validating which users can log in to SHIPS to retrieve and set passwords, as well as managing which authorized clients are allowed to connect. Authentication idents can be /etc/shadow, SQLite (a database) or, finally, an external source using the LDAP protocol for querying users and computers. When it comes to the actual clients, these are handled in arrays unless you use either LDAP or simply allow any client to connect to SHIPS. If you do allow any client to connect without validating the client's name, which is not recommended, it opens up a possible DoS attack against the SHIPS server database. In an enterprise, most would probably go with the LDAP option, and most enterprises rely on Active Directory. That does not mean that you are running LDAP. A Windows domain controller does not run LDAP unless you have installed it; it is not included by default when installing Active Directory Domain Services. So, you might have to actually install and configure LDAP in your environment first, and that takes some planning. It is not impossible by any means, but it is an additional step that you should be aware of if you are to implement SHIPS using LDAP as the ident store.
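
If you go the LDAP route, a simple sanity check is to verify that the SHIPS server can actually query the directory. As a rough sketch, assuming a hypothetical directory server and service account (the host name, DNs and account below are only examples), an ldapsearch test could look like this:

# Bind to the directory and look up a user
ldapsearch -H ldap://dc01.example.local -D "CN=svc-ships,OU=Service Accounts,DC=example,DC=local" -W \
  -b "DC=example,DC=local" "(sAMAccountName=jdoe)"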

As a last note about idents, it is possible to develop your own ident and integrate it with SHIPS.

4. LDAP

LDAP uses port 389, and one should remember that this is a clear-text protocol. If you use LDAP as the ident store for managing the SHIPS interface, it is possible to sniff the traffic between the SHIPS server and the LDAP server. This might not be a very big problem, but using LDAP over TLS on port 636 would have been better. This is something I would like to see added to SHIPS if it is doable. The authentication between the SHIPS administrator's client and the SHIPS server uses SSL, so that is protected, but the LDAP requests from SHIPS to the LDAP server are not.
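
If you want to check whether your directory server already offers LDAP over TLS, a quick test with openssl (the host name below is just an example) is to connect to port 636 and inspect the certificate it presents:

# Connect to the LDAPS port and show the certificate chain the server presents
openssl s_client -connect dc01.example.local:636 -showcerts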

Summary

I think SHIPS is a great solution, quite capable even though perhaps a bit tricky to get up and running. Sysadmins who expect a simple installer will be disappointed, but as I stated in the beginning, the manual steps add to one's understanding of how SHIPS really works, and that is very important. I will try to test it once I have an environment set up to really support SHIPS, not just one that barely gets by. SHIPS looks like a quality tool and a great project, and I want to test it in the best way I can. I will write about my experience testing SHIPS later on.

SHIPS have set sail

I was most pleased when I saw the release of the SHIPS software from TrustedSec. The problem of managing local admin accounts could be a thing of the past with this tool, and the best thing about it is that it is open source. The idea behind SHIPS is to rotate the local admin password with a randomly generated one. It is client and server based: the server part holds the passwords in encrypted form, while the client part sets the actual password for the local admin user on every box where you have the SHIPS client installed. It can be installed on laptops, desktops and servers; it does not really matter, as long as they are running Windows. The communication between the SHIPS server and client relies on HTTPS, so nothing is transmitted or stored in clear text.
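
Just to illustrate the rotation concept itself, here is a rough PowerShell sketch of the idea; this is not SHIPS's actual client code, only my own assumption of how such a rotation could be done:

# Illustration only - not the real SHIPS client code
# Generate a random 16-character password and set it on the local Administrator account
Add-Type -AssemblyName System.Web
$newPassword = [System.Web.Security.Membership]::GeneratePassword(16, 4)
$admin = [ADSI]"WinNT://./Administrator,user"
$admin.SetPassword($newPassword)
# A tool like SHIPS would then escrow the new password on the central server over HTTPS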

The most used solution today, a tool called AdmPWD, does not support encryption in the version that is publicly available; passwords are stored in clear text in Active Directory. Not everyone can read that attribute, but it would feel better knowing those passwords were indeed encrypted. With SHIPS, that problem is solved.

This looks like a great boost for everyone on a blue team, as this has been and still is a real hassle. It will definitely make life harder for any penetration tester. I can't wait to try this out. Thanks to TrustedSec for releasing this tool, awesome job; now I am just waiting for a Linux version!

VMware PowerCLI credentials

VMware PowerCLI is a very powerful tool for managing a VMware infrastructure using PowerShell. Stopping and starting virtual machines, and a ton of other stuff, is available to you as an administrator. It is also quite useful for automation using scripts. However, when using scripts, the credentials you provide to connect to your vSphere or vCenter host are not something that should be exposed in clear text. The solution is to use the credential store mechanism that is available, which allows you to safely store usernames and passwords and access them later on in your scripts. Basically, you store credentials for a specific vSphere or Virtual Center server in encrypted form, where they remain safe from prying eyes.

The PowerShell cmdlet to use is New-VICredentialStoreItem, which takes the following parameters: Host, User and Password.

So, to safely store credentials for a host I could enter the following:

New-VICredentialStoreItem -Host virtualcenter.local -User demo -Password P@ssw0rd

Once this is done I could simply connect to my Virtual Center server using the stored credentials as follows:

Connect-VIServer virtualcenter.local
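
PowerCLI also includes Get-VICredentialStoreItem and Remove-VICredentialStoreItem, so listing the stored credentials, or cleaning them out when they are no longer needed, could look like this (using the same example host and user as above):

Get-VICredentialStoreItem

Remove-VICredentialStoreItem -Host virtualcenter.local -User demo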

Password management for shared passwords

How do you, in your company or organization, store passwords that are shared among staff or teams? I hope you do not use an unencrypted spreadsheet located on a network share in a folder named Passwords.

Most probably use a password manager application such as KeePass, and even though it is open source (which I like) and sounds very good, it has weaknesses. Even though it uses AES 256-bit encryption, the encryption is no good if the passphrase is chosen poorly. This is not a flaw of just KeePass, but of any password manager. Second, even if the passphrase is very good, it can still be defeated by a key-logger which sniffs the passphrase. So, no real joy.

Many password managers also come with browser extensions for easy password fill-in when browsing the web. This has been proven to be quite a bad idea, as all it took was some JavaScript to defeat it. Kevin Mitnick and Dave Kennedy showed a demo of this at DerbyCon; see this YouTube video.

But the problem is still there: how do you share passwords in a secure manner? The storage of the passwords needs to be encrypted or secured in a physical way to prevent unauthorized access, but the passwords also need to be available to the people who require them. Therein lies the problem, and it is a very difficult problem.

I have no real answer to this question, but there are things you can do that will slow an attacker down. Use a standalone network which can't be accessed from the Internet; ideally, it should only be accessible from a secure physical location. Do that, and the attacker will have to go to great lengths to retrieve the data. It is when we get sloppy and just save our passwords in plain text on a network share that the attackers get that easy win. Do not make it that easy; make the extra effort and stay safe.