
Request routing in Heroku

In the current deployment stack, an app named myapp will have the default hostname of myapp.herokuapp.com. The herokuapp.com domain routes any request to Heroku via the routing mesh, which provides a direct routing path to the corresponding web processes. This allows advanced HTTP features such as chunked responses, long polling, and the use of an asynchronous web server to handle multiple responses from a single web process.
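For instance, a minimal sketch of a web process that streams a chunked response might look like the following (TypeScript on Node.js; the route behavior, intervals, and port fallback are illustrative assumptions, not Heroku requirements):

```typescript
// Sketch: a web process that streams a chunked response.
import * as http from "http";

const server = http.createServer((req: http.IncomingMessage, res: http.ServerResponse) => {
  // Node's http server uses Transfer-Encoding: chunked automatically when the
  // response is written in pieces without a Content-Length header.
  res.writeHead(200, { "Content-Type": "text/plain" });

  let count = 0;
  const timer = setInterval(() => {
    res.write(`chunk ${++count}\n`);        // each write is flushed to the client
    if (count === 5) {
      clearInterval(timer);
      res.end("done\n");                    // close the response
    }
  }, 1000);

  req.on("close", () => clearInterval(timer)); // stop if the client disconnects
});

// Heroku injects the port to bind to via the PORT environment variable.
server.listen(Number(process.env.PORT) || 3000);
```

Because the routing path goes straight to the web process, each write reaches the client as its own chunk, which is what makes chunked responses and long polling practical.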

The routing mesh is responsible for directing each incoming request to an appropriate dyno in the dyno manifold. It distributes requests across the dynos using a round robin algorithm, as shown in the following diagram and in the conceptual sketch that follows it:

[Diagram: Request routing in Heroku]
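To make the round robin idea concrete, here is a purely illustrative selector over a hypothetical list of web dynos; Heroku's actual router is, of course, far more involved:

```typescript
// Illustrative only: a toy round robin selector over a list of dynos.
const dynos = ["web.1", "web.2", "web.3"];
let next = 0;

function pickDyno(): string {
  const dyno = dynos[next];
  next = (next + 1) % dynos.length;  // cycle back to the first dyno
  return dyno;
}

// Ten incoming requests are spread evenly across the three dynos.
for (let i = 0; i < 10; i++) {
  console.log(`request ${i} -> ${pickDyno()}`);
}
```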

An HTTP request is given an initial timeout of 30 seconds for the web process to return its first response data to the client. If the web process fails to respond within this window, an error is logged for the corresponding process. After this initial communication, a longer, rolling 55-second window applies: if neither the client nor the server sends any data within it, the connection is terminated and an error is logged again. This scheme of rolling timeouts frees up network connections when it is unlikely that any further communication will happen within a practical time period. Fewer open connections mean better resource management and performance.
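In practice, this means a long-running response must send some data at least every 55 seconds or the connection will be dropped. A minimal sketch (TypeScript on Node.js; the heartbeat interval and the simulated two-minute job are illustrative values, not Heroku settings):

```typescript
import * as http from "http";

// Sketch: keep a long-running response alive by writing a heartbeat
// well inside the 55-second rolling window.
http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });

  const heartbeat = setInterval(() => res.write("\n"), 25 * 1000);

  // Pretend the real result takes two minutes to compute.
  setTimeout(() => {
    clearInterval(heartbeat);
    res.end("result ready\n");
  }, 120 * 1000);
}).listen(Number(process.env.PORT) || 3000);
```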

The Heroku stack also supports running multithreaded and/or asynchronous applications that can accept many network connections to process client events. For example, a Node.js app typically handles many network connections per process while serving client requests.
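As a rough sketch of what "many connections per process" looks like, the following assumed example parks every long-polling client in a single Node.js-style process and answers them all when an event is published (the /poll and /publish routes are hypothetical):

```typescript
import * as http from "http";

// Sketch: a single web process holding many long-poll clients open at once.
// Each waiting client is parked in a Set and answered when an event arrives.
const waiting = new Set<http.ServerResponse>();

http.createServer((req, res) => {
  if (req.url === "/poll") {
    waiting.add(res);                           // park the client
    req.on("close", () => waiting.delete(res)); // drop it if it disconnects
  } else if (req.url === "/publish") {
    for (const client of waiting) {             // answer every parked client
      client.writeHead(200, { "Content-Type": "text/plain" });
      client.end("event!\n");
    }
    waiting.clear();
    res.end("delivered\n");
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(Number(process.env.PORT) || 3000);
```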

The Cedar stack no longer includes a reverse proxy cache such as Varnish; instead, it gives developers the flexibility to choose a CDN or caching solution of their own. Heroku's add-on architecture makes it easy to enable such an option.

The execution environment - dynos and the dyno manifold

A dyno is like a virtualized UNIX container that can run any type of process on the dyno manifold. The dyno manifold is the process execution environment in which the various dynos run, each potentially catering to different client requests. The following diagram shows the various components of the Heroku execution environment:

[Diagram: The execution environment - dynos and the dyno manifold]

The dyno manifold infrastructure provides the process management, isolation, scaling, routing, distribution, and redundancy required to run production-grade web and worker processes in a resilient fashion.

The manifold is a fault-tolerant, distributed environment for running your processes. When you release a new application version, the manifold manages the restart automatically, removing a lot of maintenance hassle. Dynos are recycled every day, or whenever resource constraints force dyno migration or resizing. The manifold also restarts any failed dynos automatically. If a dyno fails repeatedly, the manifold does not restart it immediately but waits for a short back-off interval before trying again. You can also restart the processes manually using the heroku restart command.
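For reference, the restart can target all dynos or a single one (the commands below are run against the example app myapp):

```bash
$ heroku ps --app myapp              # list dynos and their current state
$ heroku restart --app myapp         # restart all dynos
$ heroku restart web.1 --app myapp   # restart just the web.1 dyno
```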

You can set global parameters and other configuration variables in the .profile file in your application's root directory; it is sourced during dyno startup, before your processes start. Each dyno is self-contained in the sense that it runs a particular instance of a web or worker process and has access to sufficient resources to handle client requests.
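For example, a .profile placed in the application's root directory might look like the following; the variables shown are illustrative only:

```bash
# .profile -- sourced at dyno startup, before the process defined in the Procfile runs.
# The settings below are examples only; export whatever your application needs.
export LANG=en_US.UTF-8
export PATH="/app/bin:$PATH"
```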

A dyno does have some restrictions on the memory it can use. There is a soft ceiling of 512 MB per dyno; exceeding it causes memory overuse warnings to be written to the application log. If memory usage exceeds a threshold of 1.5 GB per dyno, the dyno is restarted with a Heroku error. It is therefore recommended that you design and size your application to use no more than 512 MB of memory; if the application exceeds this limit, it potentially has a memory leak.
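One way to watch for this from inside the application is to log the resident set size periodically. The sketch below (TypeScript on Node.js) uses the 512 MB figure from the text, while the 80 percent threshold and one-minute interval are arbitrary choices:

```typescript
// Sketch: periodically log the resident set size so memory creep towards
// the 512 MB dyno ceiling is visible in the application logs.
const LIMIT_MB = 512;

setInterval(() => {
  const usedMb = process.memoryUsage().rss / (1024 * 1024);
  if (usedMb > LIMIT_MB * 0.8) {
    console.warn(`memory usage ${usedMb.toFixed(0)} MB - approaching the dyno ceiling`);
  }
}, 60 * 1000);
```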

Dynos can serve many requests per second, but this depends greatly on the language and framework used. A multithreaded or event-driven environment such as Java or Node.js is designed to handle many requests concurrently.

Dynos execute in complete isolation and protection from one another, even when they are on the same physical hardware. Each dyno has (albeit virtually) its own filesystem, memory, and CPU that it can use to handle its own client requests.

Dynos may be running anywhere in a distributed infrastructure. Access and routing are managed internally by the routing mesh, and dynos are generally not statically addressable.

Dynos use LXC (lxc.sourceforge.net) as their foundation to provide lightweight, UNIX container-like behavior, executing different processes in complete isolation.