Tuesday, May 22, 2018

An Easy Home-Brew Stratum-1 GPS-Disciplined NTP Server

This here is my latest, and simplest, home-brew, GPS-disciplined NTP server: "Candleclock" a.k.a. "O-4". If you wanted to leave out the battery-backed-up real-time clock and the LCD display, which are really just convenience features, you could put one of these together on your lunch break out of a Raspberry Pi (here, a 3 B+),

Candleclock

a NaviSys GR-701W USB GPS receiver which supports 1PPS,

NaviSys Technology GR-701W

and some open source software. (The guy that sells the GR-701W on Etsy is the project manager of the NTPsec open source project!) The completely optional beautiful clear plastic stand is a leaflet holder from the local office supply store.

Wednesday, April 25, 2018

How do we do it? Volume.

Here's another fun exercise from this evening class on the science, nature, and philosophy of time that I'm taking. (The instructor attributes this to Nobel-Prize-winning physicist Kip Thorne's 1994 book Black Holes & Time Warps.)

Get three meter sticks: that's a measuring stick like a yard stick, but it's one meter long, it's divided up into one hundred centimeters, and every centimeter is divided up into ten millimeters.

Lay the first stick down on the table. There are one thousand (1,000 or 10³) linear millimeters along the length defined by that stick.

Lay the second stick down on the table at a ninety-degree angle to the first so that their corners touch. There are one million (1,000,000 or 10⁶) square millimeters in the area bordered by those two sticks.

Lay the third stick orthogonally so that it stands straight up from the table and its corner touches the corners of the first two sticks. There are one billion (1,000,000,000 or 10⁹) cubic millimeters in the volume bordered by those three sticks.
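The arithmetic behind the exercise, compactly:

```latex
\begin{aligned}
\text{length: } & 10^{3}\ \text{mm} \\
\text{area: }   & 10^{3} \times 10^{3} = 10^{6}\ \text{mm}^{2} \\
\text{volume: } & 10^{3} \times 10^{3} \times 10^{3} = 10^{9}\ \text{mm}^{3}
\end{aligned}
```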

See, you thought a billion was big, didn't you? But here we are, with a billion easily measurable things just in the space on the table right in front of us.

(I was forced - FORCED, I tell you - to order three inexpensive wooden meter sticks from Amazon just so I can do this in real-life.)

Tuesday, April 24, 2018

Using the Open Secure Socket Layer Library in C

(Updated 2018-05-01.)

I've used the tools provided by Open Secure Socket Layer (OpenSSL), as well as third-party tools like cURL that support OpenSSL and other SSL/TLS utilities like ssh(1) and scp(1), for years. And I've obviously spent a lot of time with hardware entropy generators, which provide the randomness crucial to using encryption securely. But I've never used the OpenSSL C API. This, despite the fact that much of my work is in the embedded realm, which is very C or C++ centric.

I thought it was time to remedy that. So, in the spirit of learning by doing, I wrote a C library - Codex - that somewhat simplifies the use of encryption in communication streams for the kinds of C code I am frequently called upon to write.

But first, a disclaimer.

Cryptography, a vast, complex field of endeavor, is in no way my area of expertise. I am merely a humble product developer. This article is a little bit on how Codex works, a little on how OpenSSL works, and a little on what my experience was developing against the OpenSSL library.

Using Codex is only a little simpler than using OpenSSL.

The amount of code involved for even a simple functional test using the simpler Codex API is substantial. So instead of including code snippets, I'll just point you to the simplest examples in the repository on GitHub.

Here is a simple server program written using Codex: unittest-core-server.c.

Here is a client program that talks to the server: unittest-core-client.c.

There are a lot of flavors of OpenSSL and they are all a little different.

Codex compiles and runs with
  • OpenSSL 1.0.1, the default on Raspbian 8.0 "jessie";
  • OpenSSL 1.0.2g, the default on Ubuntu 16.04.3 "xenial";
  • OpenSSL 1.1.0, the default on Raspbian 9.4 "stretch";
  • BoringSSL 1.1.0, Google's substantially different fork of OpenSSL; and
  • OpenSSL 1.1.1, which was the current development version at the time I wrote Codex.
The complete suite of Codex unit and functional tests successfully ran on all of these versions. And for a few of the targets I used, I successfully ran the functional tests with different versions communicating across platforms. But the underlying C code in Codex is littered with conditional #if statements selecting the appropriate code for the various OpenSSL versions, because of differences in the evolving OpenSSL C API.

In the end, it wasn't a big deal, but it was a learning experience. And my intuition tells me that some porting will be necessary with each subsequent OpenSSL release.

OpenSSL is just one of many implementations of the Transport Layer Security (TLS) standard - which has replaced the original SSL, even though I'll continue to use the term SSL in this article. TLS itself is a moving target. TLS is defined in a growing number of Request for Comment (RFC) Internet standards documents. Architects of the TLS standard, and the developers of the libraries and tools which implement it, are in a kind of cold war with other actors on the Internet, some of whom are supported by nation states, terrorist organizations, or criminal enterprises, most with nefarious intent.

I ran Codex on several different targets and platforms.

I ran the Codex unit and functional tests with OpenSSL on the following targets and platforms.

"Nickel"
Intel NUC5i7RYH (x86_64)
Intel Core i7-5557U @ 3.10GHz x 8
Ubuntu 16.04.3 LTS "xenial"
Linux 4.10.0
gcc 5.4.0

"Lead" or "Copper"
Raspberry Pi 3 Model B (64-bit ARM)
Broadcom BCM2837 Cortex-A53 ARMv7 @ 1.2GHz x 4
Raspbian 8.0 "jessie"
Linux 4.4.34
gcc 4.9.2

"Bronze"
Raspberry Pi 2 Model B (32-bit ARM)
Broadcom BCM2836 Cortex-A7 ARMv7 @ 900MHz x 4
Raspbian 8.0 "jessie"
Linux 4.4.34
gcc 4.9.2

"Cobalt"
Raspberry Pi 3 Model B (64-bit ARM)
Broadcom BCM2837 Cortex-A53 ARMv7 @ 1.2GHz x 4
Raspbian 9.4 "stretch"
Linux 4.14.30
gcc 6.3.0

I did not run all OpenSSL versions on all targets and platforms. I did successfully run the server side functional test on the NUC5i7 Ubuntu x86_64 platform with the client side functional test on each of the Raspberry Pi Raspbian 32- and 64-bit ARM platforms; I felt this would be a fairly typical Internet of Things (IoT) configuration.

OpenSSL is broadly configurable, and that configuration matters.

Using TLS isn't just about using one algorithm to encrypt your internet traffic. It's about using a variety of mechanisms to authenticate the identity of the system with which you are communicating, encrypting the traffic between those systems, and protecting the resources you are using for that authentication and encryption against other actors. It's a constantly evolving process. And because every application is a little different - in terms of its needs, its available machine resources, etc. - there are a lot of trade-offs to be made. That means a lot of configuration parameters. And choosing the wrong values for your parameters can mean you burn a lot of CPU cycles for no reason, or that you leave yourself open to be hacked or spoofed by a determined adversary.

Codex has a number of OpenSSL-related configuration parameters. The defaults can be configured at build-time by changing Makefile variables. Some of the defaults can be overridden at run-time by setters defined in a private API. Here are the defaults I chose for Codex, which give you some idea of how complex this is:
  • TLS v1.2 protocol;
  • RSA asymmetric cipher with 3072-bit keys for encrypting certificates;
  • SHA256 message digest cryptographic hash function for signing certificates;
  • Diffie-Hellman with 2048-bit keys for exchanging keys between the peer systems;
  • A symmetric cipher for encrypting the data stream limited to those that conform to the Federal Information Processing Standard 140 Security Requirements for Cryptographic Modules (FIPS-140), a U.S. government computer security standard that is a common requirement in both the federal and commercial sectors.
I am pretty much depending entirely on the expertise of others to know whether these defaults make sense.

You may have to rethink how you write applications using sockets.

There are some simple helpers in Codex to assist with using the select(2) system call, which is used to multiplex sockets in Linux applications. (If you prefer poll(2) to select(2) you're mostly on your own. But on modern Linux kernels, select(2) is implemented using poll(2). I've used both, but have some preference for the select(2) API.) Multiplexing OpenSSL streams is more challenging than it might seem at first.

As the Codex functional tests demonstrate, I've multiplexed multiple SSL connections using select(2) via the Diminuto mux feature. (Diminuto is my C systems programming library upon which Codex - and many other C-based projects - is built.) But in SSL there is a lot going on under the hood. The byte stream the application reads and writes is an artifact of all the authentication and crypto going on in libssl and libcrypto. The Linux socket and multiplexing implementation in the kernel lies below all of this and knows nothing about it. So the fact that there's data to be read on the socket doesn't mean there's application data to be read. And the fact that the select(2) call in the application doesn't fire doesn't mean there isn't application data waiting to be read in a decryption buffer.

A lot of application reads and writes may merely be driving the underlying SSL protocol and its associated state machines. OpenSSL doesn't read or write on the socket of its own accord; it relies on the application to do so by making the appropriate API calls. A read(2) on a socket may return zero application data, but will have satisfied some need on the part of OpenSSL. Zero application data doesn't mean the far end closed the socket.

Hence multiplexing isn't as useful as it might seem, and certainly not as easy as in non-OpenSSL applications. A multi-threaded server approach, which uses blocking reads and writes, albeit less scalable, might ultimately be more useful. But as the functional tests demonstrate, multiplexing via select(2) can be done. It's just a little more complicated.

I should especially make note that my functional tests pass a lot of data - several gigabytes - and the application model I'm using almost always has data that needs to be written or read on the socket. What I don't test well in Codex are circumstances in which the SSL protocol needs reads or writes on the socket but the application itself doesn't have any need to do so. This can happen routinely in other patterns of application behavior.

Codex tries to deal with this by implementing its own state machines that can figure out, from the API return codes, when OpenSSL needs an additional read or write. I'm not convinced I've adequately tested this, nor indeed have I figured out how to adequately test it.

Here is a state machine server program written using Codex: unittest-machine-server.c.

Here is a client program that talks to the server: unittest-machine-client.c.

Either side of the communication channel may need to send or receive independently of the other just to drive the underlying OpenSSL state machines. One approach I used to decouple sending and receiving is to make the two sides of the server asynchronous with respect to one another by using queueing.

Here is a queueing server program written using Codex: unittest-handshake-server.c.

Here is a client program that talks to the server: unittest-handshake-client.c.

You'll have to learn about public key infrastructure and certificates.

Certificates are the "identity papers" used by systems using TLS to prove that they are who they say they are. But like passports and driver's licenses, certificates can be forged. So a certificate for Alice (a computer, for example) will be cryptographically signed using a certificate containing a private key for Bob, whose public key is known. The idea is that if you can trust Bob, then you can trust Alice. But what if you don't know Bob? Bob's certificate may be cryptographically signed using a certificate containing a private key for Dan, whose public key is known. And maybe you know and trust Dan. This establishes a certificate chain, as in a "chain of trust". The final certificate at the end of the chain is the root certificate, as in the "root of trust."

At some point the chain has to contain a certificate from someone you know and trust. This is often the root certificate, but it might be one of the intermediate certificates. Typically this trusted certificate is that of a certificate authority (CA), a company or organization that does nothing but sign and distribute trusted certificates to system administrators to install on their servers. Whether you realize it or not, every time you access a secure server at, say, Amazon.com, using a Uniform Resource Locator (URL) - that is, you click on a web link that begins with https: - all this is going on under the hood.

All the stuff necessary to store, manage, revoke (because that's a thing), and authenticate certificates in a certificate chain is called a Public Key Infrastructure (PKI). Codex implements just enough PKI to run its unit and functional tests. All of the Codex root certificates are self-signed; Codex acts as its own certificate authority. This is okay for the functional tests, but is in no way adequate for actually using Codex in a real-life application.

The Makefile for Codex builds simulated root, certificate authority (CA), client, and server certificates for the functional tests. It is these certificates that allow clients to authenticate their identities to servers and vice versa. (The Makefile uses both root and CA certificates just to exercise certificate chaining.) 

These are the certificates that the Codex build process creates for testing, just to give you an idea of what's involved in a PKI:
  • bogus.pem is signed by root with an invalid Common Name (CN) identifying the owner;
  • ca.pem is a CA certificate for testing chaining;
  • client.pem is signed by the root certificate for client-side tests;
  • self.pem is a self-signed certificate with no root to test rejection of such certificates;
  • server.pem is signed by root and CA for server-side tests;
  • revoked.pem has a serial number in the generated list of revoked certificates;
  • revokedtoo.pem has a serial number in the list of revoked certificates;
  • root.pem is a root certificate.
Because a lot of big-time encryption is involved in creating the necessary PKI, public and private keys, and certificates, this step can take a lot of CPU time: many minutes on a fast CPU, or tens of minutes or even longer on a slower system. Fortunately it only needs to be done once per system during the build process.

To run Codex between different computers, they have to trust one another.

When building Codex on different computers - like my Intel server and my Raspberry Pi ARM clients - and then running the tests between those computers, the signing certificates (root, and additional CA if it is used) for the far end have to be something the near end trusts. Otherwise the SSL handshake between the near end and the far end fails (just like it's supposed to).

The easiest way to do this for testing is to generate the root and CA credentials on the near end (for example, the server end), and propagate them to the far end (the client end) before the far end credentials are generated. Then those same root and CA credentials will be used to sign the certificates on the far end during the build, making the near end happy when they are used in the unit tests. This is basically what occurs when you install a root certificate using your browser, or generate a public/private key pair so that you can use ssh(1) and scp(1) without entering a password - you are installing shared credentials trusted by both peers.

The Codex Makefile has a helper target that uses ssh(1) and scp(1) to copy the near end signing certificates to the far end where they will be used to sign the far end's credentials when you build the far end. This helper target makes some assumptions about the far end directory tree looking something like the near end directory tree, at least relative to the home directory on either end.

Codex is deliberately picky about the certificates on the far end.

In addition to using the usual OpenSSL verification mechanisms, Codex provides an additional verification function that may be invoked by the application. The default behavior for accepting a connection from either the server or the client is as follows.

Codex rejects self-signed certificates, unless this requirement is explicitly disabled at build-time in the Makefile or at run-time through a setter in the private API. This is implemented through the standard OpenSSL verification callback.

If the application chooses to initialize Codex with a list of revoked certificate serial numbers (this is a thing), Codex requires that every certificate in a certificate chain have a serial number that is not revoked. This is implemented through the standard OpenSSL verification callback.

Codex requires that either the Common Name (CN) or the Fully-Qualified Domain Name (FQDN), encoded in a certificate, match the expected name the application provides to the Codex API (or the expected name is null, in which case Codex ignores this requirement). The expected name can be a wildcard domain name like *.prairiethorn.org, which will have to be encoded in the far end system's certificate.

Codex expects a DNS name encoded in the certificate in a standards-compliant fashion. Multiple DNS names may be encoded. At least one of these DNS names must resolve to the IP address from which the SSL connection is coming.

It is not required that the FQDN that matches against the expected name be the same FQDN that resolves via DNS to an IP address of the SSL connection. The server may expect *.prairiethorn.org, which could be either the CN or a FQDN entry in the client certificate, but the certificate will also have multiple actual DNS-resolvable FQDNs like alpha.prairiethorn.org, beta.prairiethorn.org, etc.

It is also not required, if a peer connects with both an IPv4 and an IPv6 address (typically it will), that both match the same FQDN specified in the certificate, or that both the IPv4 and the IPv6 address match. Depending on how /etc/hosts is configured on a peer, its IPv4 DNS address for localhost could be 127.0.0.1, and its IPv6 DNS address for localhost can legitimately be either ::ffff:127.0.0.1 or ::1. The former is an IPv4 address cast in IPv6-compatible form, and the latter is the standard IPv6 address for localhost. Either is valid.

If the peer named localhost connects via IPv4, its far end IPv4 address as seen by the near end will be 127.0.0.1 and its IPv6 address will be ::ffff:127.0.0.1. If it connects via IPv6, its far end IPv4 address may be 0.0.0.0 (because there is no IPv4-compatible form of its IPv6 address) and its far end IPv6 address will be ::1. The /etc/hosts entry for localhost may be 127.0.0.1 (IPv4), or ::1 (IPv6), or both.

Furthermore, for non-local hosts, peers don't always have control over whether they connect via IPv4 or IPv6, depending on what gateways they may pass through. Finally, it's not unusual for the IPv4 and IPv6 addresses of a single host to be given different fully-qualified domain names in DNS, for example alpha.prairiethorn.org for IPv4 and alpha-6.prairiethorn.org for IPv6; this allows a host trying to connect to a server to select the IP version by choosing which name it resolves via DNS.

Certificates can be revoked, because sometimes stuff happens.

Codex does not directly support signed certificate revocation lists (CRLs), nor the real-time revocation of certificates using the Online Certificate Status Protocol (OCSP). It will, however, import a simple ASCII file containing a human-readable and -editable list of hexadecimal certificate serial numbers, one per line, and reject any connection whose certificate chain has a serial number on that list. Here is an example.

9FE8CED0A7934174
9FE8CED0A7934175

The serial numbers are stored in-memory in a red-black tree (a kind of self-balancing binary tree), so the search time is relatively efficient.

So you can use the tools of your PKI implementation to extract the serial numbers from revoked certificates and build a list that Codex can use. OCSP is probably better for big applications, but I wonder if we'll see it used in small IoT applications.

OpenSSL is a lot faster than I expected.

Although generating the certificates during the Codex build for the functional tests takes a long (sometimes very long) time, and there is some overhead initially during the functional tests with the certificate exchange, once the keys are exchanged the communication channel uses symmetric encryption and that runs pretty quickly.

There are a number of scripts I used to do some performance testing by comparing the total CPU time of encrypted and unencrypted functional tests for various workloads and configurations. I then used R and Excel to post-process the data. This is even more of a work in progress than the rest of this effort, but so far it has yielded no surprising insights.

I also used Wireshark to spy on the encrypted byte stream between two systems running a functional test.

Just to verify that I wasn't blowing smoke and kidding myself about what I was doing, I ran a Codex functional test against an openssl s_client command. That verified that at least a tool I didn't write agreed with what I thought was going on.

There is a lot going on here.

But we might as well get used to it. Many service providers that support IoT applications already require that the communication channels between clients and servers be encrypted. That's clearly going to be the rule, not the exception, in the future.

Acknowledgements

I owe a debt of gratitude to my former Bell Labs office mate Doug Gibbons who has a deep well of experience in this topic of which he generously shared only a tiny portion with me.

Repositories

https://github.com/openssl/openssl

https://boringssl.googlesource.com/boringssl

https://github.com/coverclock/com-diag-codex

https://github.com/coverclock/com-diag-diminuto

References

D. Adrian, et al., "Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice", 22nd ACM Conference on Computer and Communication Security, 2015-10, https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf

K. Ballard, "Secure Programming with the OpenSSL API", https://www.ibm.com/developerworks/library/l-openssl/, IBM, 2012-06-28

E. Barker, et al., "Transitions: Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths", NIST, SP 800-131A Rev. 1, 2015-11

D. Barrett, et al., SSH, The Secure Shell, 2nd ed., O'Reilly, 2005

D. Cooper, et al., "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", RFC 5280, 2008-05

J. Davies, Implementing SSL/TLS, Wiley, 2011

A. Diquet, "Everything You've Always Wanted to Know About Certificate Validation with OpenSSL (but Were Afraid to Ask)", iSECpartners, 2012-10-29, https://github.com/iSECPartners/ssl-conservatory/blob/master/openssl/everything-you-wanted-to-know-about-openssl.pdf?raw=true

Frank4DD, "certserial.c", 2014, http://fm4dd.com/openssl/certserial.htm

V. Geraskin, "OpenSSL and select()", 2014-02-21, http://www.past5.com/tutorials/2014/02/21/openssl-and-select/

M. Georgiev, et al., "The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software", 19th ACM Conference on Computer and Communication Security (CCS'12), Raleigh NC USA, 2012-10-16..18, https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf

D. Gibbons, personal communication, 2018-01-17

D. Gibbons, personal communication, 2018-02-12

D. Gillmor, "Negotiated Finite Diffie-Hellman Ephemeral Parameters for Transport Layer Security (TLS)", RFC 7919, 2016-08

HP, "SSL Programming Tutorial", HP OpenVMS Systems Documentation, http://h41379.www4.hpe.com/doc/83final/ba554_90007/ch04s03.html

Karthik, et al., "SSL Renegotiation with Full Duplex Socket Communication", Stack Overflow, 2013-12-14, https://stackoverflow.com/questions/18728355/ssl-renegotiation-with-full-duplex-socket-communication

V. Kruglikov et al., "Full-duplex SSL/TLS renegotiation failure", OpenSSL Ticket #2481, 2011-03-26, https://rt.openssl.org/Ticket/Display.html?id=2481&user=guest&pass=guest

OpenSSL, documentation, https://www.openssl.org/docs/

OpenSSL, "HOWTO keys", https://github.com/openssl/openssl/blob/master/doc/HOWTO/keys.txt

OpenSSL, "HOWTO proxy certificates", https://github.com/openssl/openssl/blob/master/doc/HOWTO/proxy_certificates.txt

OpenSSL, "HOWTO certificates", https://github.com/openssl/openssl/blob/master/doc/HOWTO/certificates.txt

OpenSSL, "Fingerprints for Signing Releases", https://github.com/openssl/openssl/blob/master/doc/fingerprints.txt

OpenSSL Wiki, "FIPS mode and TLS", https://wiki.openssl.org/index.php/FIPS_mode_and_TLS

E. Rescorla, "An Introduction to OpenSSL Programming (Part I)", Version 1.0, 2001-10-05, http://www.past5.com/assets/post_docs/openssl1.pdf (also Linux Journal, September 2001)

E. Rescorla, "An Introduction to OpenSSL Programming (Part II)", Version 1.0, 2002-01-09, http://www.past5.com/assets/post_docs/openssl2.pdf (also Linux Journal, September 2001)

I. Ristic, OpenSSL Cookbook, Feisty Duck, https://www.feistyduck.com/books/openssl-cookbook/

I. Ristic, "SSL and TLS Deployment Best Practices", Version 1.6-draft, Qualys/SSL Labs, 2017-05-13, https://github.com/ssllabs/research/wiki/SSL-and-TLS-Deployment-Best-Practices

L. Rumcajs, "How to perform a rehandshake (renegotiation) with OpenSSL API", Stack Overflow, 2015-12-04, https://stackoverflow.com/questions/28944294/how-to-perform-a-rehandshake-renegotiation-with-openssl-api

J. Viega, et al., Network Security with OpenSSL, O'Reilly, 2002

J. Viega, et al., Secure Programming Cookbook for C and C++, O'Reilly, 2003