An Auspicious Datetime in UNIX History

13 Feb 2009

Today, the UNIX timestamp will be 1234567890. You can see for yourself:

$ date -d '@1234567890'
Fri Feb 13 16:31:30 MST 2009
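The same conversion can be done in Python's standard library; the MST output above is just UTC minus seven hours:

```python
from datetime import datetime, timezone

ts = 1234567890

# Interpret the timestamp as UTC; the local rendering (like the MST
# time above) depends on the machine's timezone setting.
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.isoformat())  # 2009-02-13T23:31:30+00:00
```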


DNS Server Problems with Cisco 675/678 NAT

21 Jun 2008

While working on some DNS and web server configurations today, I discovered a bug (in my opinion) in the way that NAT is implemented in the Cisco 678 DSL router. From what I’ve read, it occurs in the 675 as well. I suspect that this bug would be found in all CBOS-based devices.

My Cisco 678 is connected to a Linux server which provides firewall, proxy, DNS, DHCP and a bunch of other services to my internal network. There’s not much more than DNS which is visible to the outside world. I found that DNS requests for A records (address lookups) from the outside world coming through the Cisco 678 to my DNS server would always get the IP address of my DSL link and a TTL of 0. Other record types seemed unaffected (though, I never tested most RR types).

After some fiddling around with my DNS server, I realized that it was returning the right information. In other words, the data was being altered in transit. Since I am using NAT on the Cisco 678, I decided to look into the possibility that something was wrong there.

It turns out that the CBOS NAT implementation does not just translate IP addresses in the IP header, but will look at the entire payload of an IP packet, substituting its IP everywhere. Since the format of an IP address in a DNS response is the same as what is found in an IP header, the addresses were being translated on the way out of my network.
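To see why a blind payload scan misfires, here’s a toy model (my own sketch, not actual CBOS code): the RDATA of a DNS A-record answer is just the four raw address bytes, byte-for-byte identical to an address in an IP header, so a naive search-and-replace over the payload rewrites it too.

```python
import socket

def naive_nat_rewrite(payload: bytes, inside_ip: str, outside_ip: str) -> bytes:
    """Toy model of an over-eager NAT: blindly replace every occurrence
    of the inside address's 4-byte form anywhere in the payload."""
    needle = socket.inet_aton(inside_ip)        # e.g. b'\xc0\xa8\x01\n' for 192.168.1.10
    replacement = socket.inet_aton(outside_ip)
    return payload.replace(needle, replacement)

# The RDATA of an A-record answer is just the 4 raw address bytes --
# indistinguishable from an address in an IP header, so the scan hits it.
rdata = socket.inet_aton("192.168.1.10")
rewritten = naive_nat_rewrite(rdata, "192.168.1.10", "203.0.113.5")
print(socket.inet_ntoa(rewritten))  # 203.0.113.5
```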

A quick Google Search yielded a workaround, which I’ll describe here.

The Cisco 67x CBOS NAT implementation will not translate payload addresses if the packets are not on port 53. So, simply change the port to something else (like 5300) in a NAT entry, and your DNS lookup responses won’t be messed with. The syntax of the CBOS command to do just that is:

cbos#set nat entry add 5300 53 udp

In the workaround I found online, they never address the use of DNS over TCP. It doesn’t happen much, but it is possible for DNS requests to come over TCP rather than UDP (this usually only occurs for zone transfers and when a request produces a response too large to fit in a single UDP datagram). So, I also ran:

cbos#set nat entry add 5300 53 tcp
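As background on the TCP case: per RFC 1035, a DNS message sent over TCP uses the same wire format as UDP, just prefixed with a two-byte big-endian length field. A quick sketch of that framing (my own illustration, unrelated to CBOS):

```python
import struct

def frame_for_tcp(dns_message: bytes) -> bytes:
    """Per RFC 1035 section 4.2.2, a DNS message sent over TCP is
    prefixed with a two-byte big-endian length field."""
    return struct.pack("!H", len(dns_message)) + dns_message

# A UDP-format query (12-byte header + question for example.com, type A)
# can be reused over TCP once framed:
query = (b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x07example\x03com\x00\x00\x01\x00\x01")
framed = frame_for_tcp(query)
print(len(query), framed[:2])  # 29 b'\x00\x1d'
```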

At first, the workaround didn’t work. I deleted the NAT entries from my Cisco 678, re-created them, wrote the memory, and rebooted, at which point it started working for me. During this process, I also kept tcpdump monitoring for the traffic I wanted to see between the DSL router and my firewall box.


10 Jun 2008

Lamont’s Monitored Network Objects Protocol (a.k.a. LMNOP). Has a nice ring to it, don’t you think? OK … I’ve been thinking about the sucky things with SNMP, the Simple Network Management Protocol. One of the bigger problems, in my not so humble opinion, is the complete lack of any security. I know, you can use the community strings to specify what people have access to. Several problems abound with this approach, but it boils down to a complete lack of basic security:

  1. No encryption; you just can’t do it.
  2. No authentication; you couldn’t do it securely, anyhow.
  3. Access control via a publicly visible, group-shared, non-credentialed ID; in other words, anyone on the network can sniff the community names that are in use and then use them, and there’s never going to be any way to stop them.
  4. In some cases, the ability to manage can not be disabled; this is a huge security hole.

Now, in defense of SNMP, one could use firewalls to control packet flows and funnel them only towards the workstations you want. Though this can work in fixed environments, our networks are becoming more fluid, dynamic and mobile every day, making this approach extremely difficult to maintain, at best. Another approach, often used in the real world, is to confine SNMP traffic to an isolated "management network", a physically separate network segment that is not interconnected with the rest of the network. One problem with this approach is that not all devices that one might want to monitor or manage can confine their SNMP activity to a particular port, especially devices that aren’t network switches.

SNMPv3 does have a user-based access control mechanism (RFC3414 (Standard 62) — User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)), which includes provision to encrypt authentication-related messages, and an ACL mechanism (RFC3415 (Standard 62) — View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)). These security features are an improvement; however, only DES (in CBC mode, which is good for DES) is available for encryption, and there is no cryptographically strong message integrity, either. Nowhere in the rest of the SNMPv3 RFCs (RFC3411, RFC3412, RFC3413, RFC3416, RFC3417, RFC3418 and RFC3584) is there any mention of better encryption or cryptographic protections. At the very least, the lack of adequate protection in the user authentication process is a security hole so large as to render all the security mechanisms in its design useless. At worst, people using RFC3414 authentication could have a false sense of security and expose extensive details about the operations (as well as full administrative control) of their infrastructures to anyone who can transfer packets on their network. The long and short of all this?
I simply avoid SNMP in almost all situations. You can’t make it secure for managing anything, and most of the benefits of SNMP, except one, can be achieved in other ways with most devices. The one exception is that SNMP is a single, central way of dealing with lots of stuff. The protocol has been around for a long time and, off the top of my head, I don’t know of any vendor’s SNMP-capable device that is not compatible in some way. Thus, I’ve often thought a little about how we might improve SNMP. In the past, I’ve decided that it would have been much better if SNMP had originally stood for Simple Network Monitoring Protocol. Most of the security concerns would simply disappear. If we "remove" the management or write capabilities from the SNMP specification, then we would have just such a monitoring protocol. But that would still leave us with some pretty ugly security concerns, not to mention confusing people if we still called it SNMP. So, here’s my list of desirable features:

  • Encryptable
  • Authentication & Authorization Service
  • Monitoring Output Configuration
  • Multicast and/or Unicast
  • Mixed use of TCP and UDP

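As a sketch of what the "Authentication & Authorization" item could mean in practice, here’s a toy example of per-message authentication using an HMAC over a pre-shared key. The field names, key handling and JSON encoding are all invented for illustration, not a proposed wire format; the point is that community strings offer nothing like this.

```python
import hashlib
import hmac
import json

# Assumed pre-shared, per-agent key -- not a community string, never on the wire.
SHARED_KEY = b"not-a-community-string"

def sign_report(report: dict) -> bytes:
    """Serialize a monitoring report and prepend a 32-byte HMAC-SHA256 tag."""
    body = json.dumps(report, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return tag + body

def verify_report(message: bytes) -> dict:
    """Reject any message whose tag doesn't match; return the report if it does."""
    tag, body = message[:32], message[32:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad message authentication code")
    return json.loads(body)

msg = sign_report({"object": "eth0.in_octets", "value": 123456})
print(verify_report(msg)["value"])  # 123456
```

Any on-path tamperer who flips a byte of the body invalidates the tag, which is exactly the property the community-string model lacks.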
Once a protocol like this is standardized, we could build upon it to create SNMPS (or SSNMP?), a secured form of SNMP for management operations. I think it shouldn’t have the monitoring elements, as LMNOP would cover monitoring. In that case, devices and applications which want to use both monitoring and management features should implement support for both LMNOP and SNMPS. There might be a new RFC to write for this idea. I’ve never done that before. Perhaps I will. Whether I do or not, I thought the acronym was worth writing down.

‘leet’ Mail Server

28 May 2008

I thought it was a little bit funny to find this in today’s logwatch email from one of my servers:

——————— postfix Begin ————————

7118055 bytes transferred
1337 messages sent
1337 messages removed from queue

When maildrop Fills a Log File

30 Apr 2008

I hadn’t bothered looking at my personal email accounts since last Saturday. This evening, I was surprised to see that it looked like I wasn’t receiving emails for my OpenBrainstem or addresses. The last messages had come in sometime late Sunday morning.

First thing I did was to log into the mail server via SSH and run:

# mailq | grep '^[0-9A-F]' | wc -l

Well, that’s a wee bit of email. So I tried running this command (sorry, I didn’t capture that whole output):

# mailq | head
. . . output omitted . . .

The message I saw over and over again showed “(temporary failure. Command output: maildrop: signal 0x19)“. A quick Google search and the first link told me what I needed to know; when the log file that maildrop is writing into grows past 50MiB (that is, 52,428,800 bytes, not 50 million), it stops processing requests. Though the link Google found for me indicated a setup with one central log file, I’ve discovered that the same thing happens when you have per-user log files, like I do. This line from my /etc/maildroprc file shows what I mean:

logfile "$HOME/mail/.maildrop.log"

So, I fixed it by truncating (or, in other words, emptying) my own user’s log file. Of course, I first checked to make sure that it was the culprit:

# ls -l ~lamontp/mail/.maildrop.log
-rw-------  1 lamontp lamontp 714630 Apr 30 20:37 /home/lamontp/mail/.maildrop.log
# >~lamontp/mail/.maildrop.log
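To keep the log from hitting the limit again, something along these lines could run from cron. This is a hypothetical helper, not anything maildrop ships; the default threshold follows the limit described above, and the path and policy would be your own:

```python
import os

def truncate_if_over(path: str, limit: int = 50 * 1024 * 1024) -> bool:
    """Empty the log file if it has grown past `limit` bytes.

    Returns True if the file was truncated. The 50MiB default follows
    the maildrop limit described above; adjust it for your setup.
    """
    if os.path.getsize(path) <= limit:
        return False
    with open(path, "w"):  # opening in "w" mode truncates to zero bytes
        pass
    return True
```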