How is /etc/hosts bad? Let me count the ways.

/etc/hosts is a nice way to temporarily convince a host that certain DNS mappings exist, whether for testing, troubleshooting, or working around some oddity. However, I’ve seen a resurgence in using /etc/hosts for more than just temporary purposes. This, in my opinion, is bad.
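For reference, that kind of temporary override is just a line in the file; the name and address below are made up for illustration:

    # /etc/hosts -- temporary override while testing (hypothetical entry)
    # Format: IP address, canonical host name, optional aliases
    192.168.10.20   app-test.example.com   app-test

Delete the line when you’re done testing and the host goes right back to trusting DNS.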

I’ve always been a huge fan of tip #6 in “The Pragmatic Programmer”: Don’t Repeat Yourself. As soon as you repeat yourself you risk the different copies getting out of sync, which causes problems and confusion. Putting a fully-qualified domain name (FQDN) in /etc/hosts as well as in DNS means that at some point later in life the two will be out of sync.

“It’s only on a couple of hosts, for testing.”

First, if it’s on more than one host you are repeating yourself. Second, testing is fine, but now it’s another thing to remember to fix when you roll into production. It also means that production will potentially be different from your test environment.

“We don’t need to put this in DNS.”

Why not? DNS is a database built to solve the host name to IP mapping problem and it’s good at it. Perhaps you have more of a political problem with whoever runs your DNS.

“You have automated tools that can maintain synchronized copies of /etc/hosts on multiple machines, so what’s the big deal?”

Just because we can doesn’t mean we should. Plus, you’re still repeating yourself. The machines can still get out of sync with each other and/or with DNS.

“If we don’t put this in DNS then it won’t get hacked.”

I’m pretty sure hackers know how to use IP addresses. This is security through obscurity, which doesn’t work.

“Well, they won’t know that the server is our database/app/web server if they can’t resolve the name.”

A quality port scanner can often tell what services are on what ports, even if you are running services on non-standard ports. In short, if you are relying on the lack of DNS to prevent hacking you’re in trouble.

If you really want, you can use ACLs in BIND to restrict who can query certain DNS zones.
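A rough named.conf sketch of that idea (the zone name and network here are hypothetical) looks something like this:

    // Only hosts on the internal network may query this zone.
    acl "internal" { 192.168.10.0/24; 127.0.0.1; };

    zone "internal.example.com" {
        type master;
        file "zones/internal.example.com.db";
        allow-query { internal; };
    };

That keeps the names resolvable for the machines that need them without publishing them to the world.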

“I want entries in /etc/hosts for performance reasons.”

A caching name server on the host may also increase performance while still getting its information from DNS, which does not violate the don’t-repeat-yourself rule.
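If you want a sketch of what that looks like, a caching-only BIND instance is only a few lines of named.conf; the forwarder address here is hypothetical, and other caching resolvers (dnsmasq, unbound) work just as well:

    // Caching-only resolver listening on localhost
    options {
        directory "/var/cache/bind";
        recursion yes;
        listen-on { 127.0.0.1; };
        allow-query { 127.0.0.1; };
        forwarders { 192.168.10.2; };   // your real DNS servers
    };

Point /etc/resolv.conf at 127.0.0.1 and most lookups get answered out of the local cache, while the data itself still lives in DNS.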

“I want entries in /etc/hosts for reliability reasons.”

Again, perhaps a local caching name server would fix the problem. And if you have unreliable DNS you are probably having other problems, too. Perhaps you should fix that.

“DNS is tricky to administer, and the files are simpler.”

Maybe, but if you have a service, application, or system that needs DNS entries you should probably figure it out. Eventually you’ll have to know something about DNS. Editing files is simple until you get more than one server, and then the effort to keep the hosts files synchronized is usually better spent keeping DNS up to date.

“We define host names to have different IPs using /etc/hosts, which is how we do load balancing.”

You can do the same thing with round-robin DNS entries.
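A sketch of the equivalent in a zone file, with made-up addresses: three A records sharing one name, handed out in rotation:

    ; Round-robin: three addresses behind one name
    www     300     IN      A       192.168.10.20
    www     300     IN      A       192.168.10.21
    www     300     IN      A       192.168.10.22

The short TTL keeps clients from holding on to any one address for too long.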

“We define host names to have different IPs, based on the functionality the server needs. So ‘database’ is 192.168.10.20 if it only reads and 192.168.10.21 if it needs to write.”

That sounds very confusing. Perhaps you could just register ‘database-read’ and ‘database-write’ in DNS and teach your app which one to use?
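A sketch of what those records might look like, using the addresses from the question above:

    ; Separate names for the read and write endpoints
    database-read   IN      A       192.168.10.20
    database-write  IN      A       192.168.10.21

If the write endpoint ever moves, you change one record in one place instead of hunting down hosts files.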

“Hosts files are a proven, reliable technology.”

*sigh* So is DNS…

Comments on “How is /etc/hosts bad? Let me count the ways.”

  1. I’ve always encouraged folks to not put all of their critical infrastructure in a virtual environment, for precisely that reason. I guess a hosts file could work in that case, but you’d have to ensure that everything you needed is in it, and then you have all the problems with keeping the file up to date, etc.

  2. There’s nothing wrong (in my opinion) with a virtual environment. In fact, properly utilized, virtualization and clustering can get you superior uptime for your services, DNS included.

  3. Can’t argue with most of your facts… until your ISP blocks all DNS queries to any server other than their own, and threatens to terminate your service because you hit certain sites too frequently. (And that was with a caching server in place.)

    So I use the /etc/hosts files to ensure no hits to their DNS, as I can maintain these files via a central copy on my web server, which is updated like DNS based on TTL values and such.

    I know, crazy solution… but when you’re dealing with crazy companies… you make do.
