I’ve noticed that many who have difficulty getting SMTP STARTTLS to work through a relatively modern Cisco ASA simply disable ESMTP inspection altogether.
A much more elegant way to permit this is the following, based largely on Cisco’s Configuring Inspection of Basic Internet Protocols (which should be mandatory reading for any Cisco ASA operator).
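A sketch of such a configuration, following the procedure in Cisco’s guide; the tls-esmtp policy-map name is arbitrary:

```
policy-map type inspect esmtp tls-esmtp
 parameters
  allow-tls action log
!
policy-map global_policy
 class inspection_default
  no inspect esmtp
  inspect esmtp tls-esmtp
```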
By default, global_policy is applied globally (service-policy global_policy global), so nothing additional should be needed. The above configuration is confirmed to work on 8.2 and newer.
I sometimes make assumptions that certain things in the realm of Information Security are obvious. Clearly, I need to break myself of this habit.
I had the opportunity to receive an email today asking that I submit a CSR for signing. In the attached zip file was the corresponding private key.
So, for those who haven’t thought about this lately, here’s a reminder.
NOTE: I have since reread this, and discovered it to be horribly rambling. YMMV. -cjp
I refer to RFC 3647. While this document is stylized more towards potential Certificate Authorities, section 4.6.2 addresses a number of questions anyone handling private key material should ask of themselves and of whomever they delegate control of private key material to.
What standards, if any, are required for the cryptographic module used to generate the keys? A cryptographic module can be composed of hardware, software, firmware, or any combination of them. For example, are the keys certified by the infrastructure required to be generated using modules compliant with the US FIPS 140-1? If so, what is the required FIPS 140-1 level of the module? Are there any other engineering or other controls relating to a cryptographic module, such as the identification of the cryptographic module boundary, input/output, roles and services, finite state machine, physical security, software security, operating system security, algorithm compliance, electromagnetic compatibility, and self tests.
Essentially, if your cryptographic module is faulty, weak or compromised, you’re potentially not going to get good keypairs from it. NIST came up with the FIPS 140 standard as a basis for validation of cryptographic modules.
The current version of FIPS 140 is FIPS 140-2.
FIPS 140-2 validated cryptographic modules have met the standards put forward by NIST. Claims of “our product uses FIPS 140-2 algorithms” are essentially meaningless; a cryptographic module is either validated or it’s not.
A system built with a FIPS 140-2 module does not mean the system is secure. Nor does it mean that a system built with a non-validated module is insecure. FIPS 140-2 only evaluates the cryptographic module, nothing more.
Is the private key under n out of m multi-person control? If yes, provide n and m (two person control is a special case of n out of m, where n = m = 2)?
Who has control of your private keys? All of your admins? Only a few? Is your server outsourced? In “the cloud?”
Anyone with OS-level access to the server generally has access to the private keys.
In the most secure cryptographic modules, the admin may require two factor authentication to export or modify private keys. Or, the module may require two admins present.
Is the private key escrowed? If so, who is the escrow agent, what form is the key escrowed in (examples include plaintext, encrypted, split key), and what are the security controls on the escrow system?
I can only imagine this being the case in a strict regulatory environment. It’s rare, but if it does apply to you, you need to ask all of these same questions about the escrow system itself.
Is the private key backed up? If so, who is the backup agent, what form is the key backed up in (examples include plaintext, encrypted, split key), and what are the security controls on the backup system?
Is the private key archived? If so, who is the archival agent, what form is the key archived in (examples include plaintext, encrypted, split key), and what are the security controls on the archival system?
Backups are critical. We all know this. But be sure your backups are being treated with the same security precautions as your live data. Printing all your webserver configurations on paper for purposes of disaster recovery might be very useful in a total disaster, but if you’re including key material, the risk may outweigh the benefit.
If you encrypt your backups, that’s better, but where is that key kept?
Under what circumstances, if any, can a private key be transferred into or from a cryptographic module? Who is permitted to perform such a transfer operation? In what form is the private key during the transfer (i.e., plaintext, encrypted, or split key)?
How is the private key stored in the module (i.e., plaintext, encrypted, or split key)?
Who can activate (use) the private key? What actions must be performed to activate the private key (e.g., login, power on, supply PIN, insert token/key, automatic, etc.)? Once the key is activated, is the key active for an indefinite period, active for one time, or active for a defined time period?
Who can deactivate the private key and how? Examples of methods of deactivating private keys include logging out, turning the power off, removing the token/key, automatic deactivation, and time expiration.
Who can destroy the private key and how? Examples of methods of destroying private keys include token surrender, token destruction, and overwriting the key.
Again, understand how the module in question works, and who has access. If we’re talking about a run-of-the-mill Apache webserver, anyone with root on the box has full control of the cryptographic module. If this isn’t acceptable to you, take action. Employ some sort of SSL/TLS offload appliance, or use a hardware cryptographic module with strong key controls.
If you are stuck with a run-of-the-mill Apache server, take reasonable precautions. Make sure the key is protected on the box itself. Harden the box. Secure physical access to the box. Have a way to detect a breach of the box, and the ability to quickly revoke the certificate.
If you symmetrically encrypt your private key, who has access to this password? Is there someone always available with the password in case your server is rebooted?
Do not email or otherwise transmit your private key in an insecure fashion. The whole cryptographic system depends on it.
Provide the capabilities of the cryptographic module in the following areas: identification of the cryptographic module boundary, input/output, roles and services, finite state machine, physical security, software security, operating system security, algorithm compliance, electromagnetic compatibility, and self tests. Capability may be expressed through reference to compliance with a standard such as U.S. FIPS 140-1, associated level, and rating.
This is what you’re paying for when buying a FIPS 140-2 validated module. Depending on how these points are handled, the module might qualify for varying levels of FIPS 140-2 validation.
Most of us just take these capabilities of a crypto module for granted. We all tend to trust OpenSSL, right? Few of us have the ability to challenge what a crypto module does. Fortunately, at least certain implementations of OpenSSL are validated, so we can be reasonably certain it is safe to use. But, as with most things in Information Security, it comes down to trust and accepting some level of risk.
In summation, no, you should not email me your private key.
Derived from Mike Tigas’ post from some years ago. Mostly here for my own reference, but enjoy anyway.
Basically, I needed a quick, repeatable way of doing a wireless pcap on OS X without futzing with Wireshark. Wireshark is great for analysis, but it’s really annoying for capturing packets.
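The script itself is gone from this copy, but the approach can be sketched in a few lines of shell. Assumptions here: the private airport utility at its usual framework path, en1 as the wireless interface, and channel 6 as a default; the capture lands in /tmp as a .cap file you can open in Wireshark afterwards.

```shell
#!/bin/sh
# Hedged sketch of a one-shot wireless capture on OS X. The interface name,
# channel, and the airport binary path are assumptions, not gospel.
AIRPORT=/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport
IFACE=${1:-en1}      # wireless interface (often en0 on newer Macs)
CHANNEL=${2:-6}      # 802.11 channel to capture on

if [ -x "$AIRPORT" ]; then
    sudo "$AIRPORT" "$IFACE" -z                 # dissociate from any network
    sudo "$AIRPORT" "$IFACE" sniff "$CHANNEL"   # writes /tmp/airportSniff*.cap
else
    echo "airport utility not found; this sketch only applies on OS X" >&2
fi
```

Interrupt the sniff with Ctrl-C and pull the resulting .cap file into Wireshark for analysis.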
Here’s a little Perl script I threw together to fetch the latest Bogon list from cymru.com and generate a Cisco IOS ACL blocking them as source. It’s a good ACL to have facing your upstream.
The script depends only on the core Socket module, so you don’t need curl, wget, or any extra Perl modules installed.
Also note, this is for Cisco IOS, not for PIX/ASA, which uses netmasks rather than wildcard masks.
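The Perl script itself didn’t survive here, but the interesting part is mechanical: take each CIDR prefix from Team Cymru’s bogon list and turn it into an IOS access-list line, converting the prefix length into a wildcard (inverse) mask. A hedged sketch of that conversion, in Python rather than the original Perl, purely for illustration; the function name and ACL number are my own:

```python
import ipaddress

def bogon_acl(prefixes, acl_num=100):
    """Render CIDR prefixes as a Cisco IOS ACL denying them as source.

    IOS extended ACLs use wildcard masks -- the bitwise inverse of the
    netmask -- which the ipaddress module exposes as .hostmask.
    """
    lines = [f"no access-list {acl_num}"]
    for p in prefixes:
        net = ipaddress.ip_network(p.strip())
        lines.append(f"access-list {acl_num} deny ip "
                     f"{net.network_address} {net.hostmask} any")
    lines.append(f"access-list {acl_num} permit ip any any")
    return lines

# In the real script the prefixes came from cymru.com's bogon reference;
# two well-known entries stand in for it here:
for line in bogon_acl(["10.0.0.0/8", "192.168.0.0/16"]):
    print(line)
```

Note the trailing permit ip any any: without it, applying the ACL inbound on your upstream interface would drop everything else too.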
We had a piece of client software which would open a persistent Oracle SQL*Net (Net8) connection to the server. However, due to the ways the users would work, often this connection would sit idle for very long times.
We began getting reports from our service desk that users at remote sites and users connecting over VPN would get a bizarre Oracle error and the application would crash. After much digging, we discovered the software would attempt to issue an Oracle command, and it seemed that the server would send a TCP RST in reply.
It turns out, however, that the server was not sending the RST; a PIX firewall was. The default behavior on Cisco PIX and ASA firewalls is to time out an idle TCP connection after one hour. This is essentially broken behavior, as RFC 793 says nothing about the longevity of a TCP session or about keepalives. However, firewalls are a reality, so we need some way of working around this.
Increasing the global parameter on the PIX or ASA firewall is an option, but on a very busy firewall, this could begin to use up resources, so I don’t recommend it. It might be an option to set this to eight hours on a fairly quiet firewall:
timeout conn 08:00:00
A more attractive option is to use TCP keepalives. Oracle does not send keepalives by default, and in a Windows client environment the OS typically sends keepalives only once every two hours. So, two changes are needed.
In tnsnames.ora, add the (ENABLE=BROKEN) atom inside the (DESCRIPTION) container. This will cause Oracle SQL*Net (or Net8) to ask the OS to send TCP keepalives.
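A hedged example of a tnsnames.ora entry with the (ENABLE=BROKEN) atom in place; the alias, host, port, and service name are placeholders:

```
MYDB =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )
```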
In the Windows Registry, add a DWORD value named KeepAliveTime under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters, with a value of 1800000 decimal. This corresponds to 30 minutes.
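For deployment to many clients, the same change can be expressed as a .reg file (dword values are hex; 0x001b7740 is 1800000 decimal):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"KeepAliveTime"=dword:001b7740
```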
I’m fairly certain a reboot is in order, as this is Windows.
When we applied this to all our mobile users, the problem went away completely.
Last night, I finally got greylisting working on my Postfix mail server. Briefly, greylisting delays mail from a particular sender-recipient-origin_ip triplet, forcing the origin server to resend after a time period. The theory is that spam servers generally do not follow the normal SMTP behaviour of retry.
Under Postfix, there are a number of policy servers used to accomplish this. I chose Postgrey because it was simple (didn’t require MySQL) and ran under Perl. Remember that my mailserver is a fairly slow box:
$ dmesg | grep cpu0
cpu0 at mainbus0: TMS390Z50 v1 @ 36 MHz, on-chip FPU
cpu0: physical 20K instruction (64 b/l), 16K data (32 b/l): cache enabled
$ uname -mnprs
NetBSD mumblemumble 2.0.2 sparc sparc
Really, I wanted something that was written in C, but they were all very database heavy, and I do not see a reason for having an SQL database for any sort of mail server function for my tiny little domain.
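Hooking Postgrey into Postfix is a small main.cf change; a hedged sketch, assuming Postgrey is listening on its common default inet port 10023:

```
# main.cf -- consult the greylisting policy server before accepting a recipient
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023
```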
Well, usually I wake up to a bunch of spam in my inbox, or at least a considerable amount in my spam folder (put there by SpamAssassin, a spam filter which incorporates Bayesian filtering, in addition to other content matching techniques).
This morning: no spam. Zero. Not even one spam message in my spam folder. I checked my mail logs, and there were numerous attempts, but they were all greylisted, and no retries were made.