December 23, 2018 • ☕️ 3 min read
ARPANET was the network that became the basis for the Internet. It was developed under the direction of the U.S. Advanced Research Projects Agency (ARPA). Initially, ARPANET was a small, friendly community of a few hundred hosts. A single file, HOSTS.TXT, contained the name-to-address mapping for every host connected to ARPANET. Our very own Unix host file /etc/hosts was actually derived from HOSTS.TXT, with the fields UNIX didn't use removed.
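To make the idea concrete, here is a minimal sketch of parsing a flat hosts table, using the same "address, then name and aliases" layout that /etc/hosts still uses today (the sample entries are made up for illustration):

```python
def parse_hosts(text):
    """Parse hosts-file text into a {name: address} mapping."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        address, *names = line.split()
        for name in names:  # a host may have several names/aliases
            table[name.lower()] = address
    return table

sample = """
# Toy excerpt in /etc/hosts style (addresses are illustrative)
10.0.0.5   sri-nic  nic
10.2.0.52  usc-isi
"""
hosts = parse_hosts(sample)
print(hosts["sri-nic"])  # -> 10.0.0.5
```

Every host on the network kept a copy of a table just like this, which is exactly why keeping all the copies in sync became a problem.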
New entries in this table were compiled, maintained, and distributed by the Stanford Research Institute's Network Information Center (SRI-NIC). ARPANET admins typically emailed their changes to the NIC and, to sync new changes (just like we do a git pull), FTP'ed to SRI-NIC and grabbed the current HOSTS.TXT file.
When ARPANET started using TCP/IP, communicating with another host came to involve a process called hostname resolution (in layman's terms: what is the IP for my domain?).
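Hostname resolution is still a one-call affair today: the standard library asks the system resolver, which consults /etc/hosts first and then DNS. A quick sketch:

```python
import socket

# "localhost" is answered straight from /etc/hosts, no DNS query needed
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Back in the HOSTS.TXT era, every lookup like this was a local read of that one flat file.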
Traffic: As ARPANET scaled to many hosts, there was a sudden increase in syncing and updating by hosts trying to stay current with the latest changes to the HOSTS.TXT file. Since there was a single source of truth (SRI-NIC), traffic on its servers increased many fold.
Name Collisions: SRI-NIC could assign unique addresses, but it had no control over the hostnames being set. Anyone could add a host whose name conflicted with an existing one, thus breaking the system.
Consistency: By the time an ARPANET admin's system finished updating its HOSTS.TXT file, new records could already have been added elsewhere, so no copy was ever guaranteed to be current.
So, in short, HOSTS.TXT wasn't a scalable solution, and ARPANET admins settled on the key problems they wanted their new solution to address.
Don't have a unified host-table system (like SRI-NIC): use a distributed system with multiple hosts so that the load on any single source is minimized. Prevent hostname collisions by using a hierarchical namespace to assign names to hosts (smells like trees!).
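The hierarchy is what kills collisions: a name only needs to be unique among its siblings, and each zone's admin controls only their own subtree. A toy sketch, using a nested dict as the name tree (labels stored root-first, just like DNS reads right to left):

```python
def register(tree, name):
    """Insert a dotted name into the tree; return False on a collision."""
    labels = name.lower().split(".")[::-1]  # www.example.com -> com, example, www
    node = tree
    for label in labels[:-1]:
        node = node.setdefault(label, {})  # walk/create the path from the root
    leaf = labels[-1]
    if leaf in node:
        return False  # collision: same label under the same parent
    node[leaf] = {}
    return True

tree = {}
print(register(tree, "www.example.com"))  # True
print(register(tree, "www.example.org"))  # True: same label, different parent
print(register(tree, "www.example.com"))  # False: duplicate within one zone
```

Note how "www" appears twice without conflict; it only clashes when registered twice under the same parent.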
Paul Mockapetris, then of the University of Southern California, designed the initial architecture of DNS. His implementation was called JEEVES. A later implementation was BIND (Berkeley Internet Name Domain), written by Kevin Dunlap for Berkeley's 4.3 BSD Unix. BIND is by far the most popular implementation of DNS and is maintained by the ISC (Internet Systems Consortium). It is DNS server software which at one point ran on more than 50% of DNS servers.
BIND is used worldwide for the online publishing of DNS data and for DNS query resolution. However, its popularity may have created fertile ground for threat actors: vulnerabilities have affected multiple versions of the software, the latest being a DoS (denial of service) vulnerability. After its last release, BIND 10, the codebase grew into an integrated authoritative DNS and DHCP server project called Bundy.
Let's explore, BFS-style, the process architecture of BIND's latest release, BIND 10.
Core Components of BIND 10:-
Boss Of Bind:- A program written in Python which handles startup, shutdown, and restarts of processes.
msgq (Message Queue):- BIND 10's inter-process message bus, which uses JSON as its data-exchange format.
cfgmgr (Config Manager):- BIND 10's config manager, capable of hot-reloading updated configuration without requiring a restart of the BIND server.
cmdctl:- Controls the server and does multiple things, like fetching module configuration from cfgmgr and authenticating users (clients) connecting to BIND.
auth:- Responsible for handling AXFR requests (the mechanism available to administrators for replicating DNS databases across a set of DNS servers) and capable of scaling into multiple processes.
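To see what AXFR-style replication buys you, here is a conceptual sketch (not BIND's implementation): a secondary server compares the primary's SOA serial number with its own and, if the primary is newer, takes a full copy of the zone. Real AXFR runs over TCP with DNS messages; this only models the decision and the copy, and the zone data is made up.

```python
def axfr_sync(primary, secondary):
    """Pull a full zone copy if the primary's SOA serial is newer."""
    if primary["serial"] > secondary["serial"]:
        secondary["records"] = dict(primary["records"])  # full zone transfer
        secondary["serial"] = primary["serial"]
        return True   # transfer happened
    return False      # already up to date

primary = {"serial": 2018122301, "records": {"www.example.com": "10.0.0.7"}}
secondary = {"serial": 2018122201, "records": {}}

print(axfr_sync(primary, secondary))            # True: secondary was stale
print(secondary["records"]["www.example.com"])  # 10.0.0.7
print(axfr_sync(primary, secondary))            # False: nothing to do now
```

Contrast this with the HOSTS.TXT days: instead of every host FTP-ing one giant file from SRI-NIC, each zone replicates only its own data between its own servers.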
Stay tuned… more to come.
The fault in my articles, they don't close themselves.