1. If you had to run a cluster of Node.js worker services (e.g. sending e-mails or generating PDF files) that were all consuming events from a message queue, how could you ensure that multiple nodes don't execute the same job? Where would you prefer that the jobs be stored? Assume you have a good budget.
This is a common problem in message queuing, and the Competing Consumers pattern addresses it. RabbitMQ provides a reliable work-queue implementation: with manual acknowledgements and a prefetch count of 1, each message is delivered to only one consumer at a time, and unacknowledged messages are redelivered if a worker dies. I would store the jobs in durable queues on the RabbitMQ cluster itself. It also scales well; a three-node RabbitMQ cluster can handle on the order of 200K messages/sec.
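The "one job, one worker" guarantee comes from the broker handing each message to exactly one consumer. As a minimal sketch, the pattern can be shown in-process, with a shared queue standing in for RabbitMQ (in production you would use a client such as `amqplib` with `channel.prefetch(1)` and manual acks); `JobQueue` and the worker names here are illustrative, not a real API.

```javascript
// In-process sketch of the Competing Consumers pattern. A shared queue
// stands in for the broker so the "one job -> one worker" property is
// visible; RabbitMQ enforces the same thing across processes.

class JobQueue {
  constructor(jobs) {
    this.jobs = [...jobs]; // pending jobs, like a durable queue
  }
  // Hand out the next job to exactly one caller. Node's single-threaded
  // event loop makes this shift() effectively atomic in-process.
  take() {
    return this.jobs.shift(); // undefined when the queue is drained
  }
}

const queue = new JobQueue([...Array(10).keys()]); // jobs 0..9
const processed = [];
const workers = ['w1', 'w2', 'w3'];

// Round-robin turns: each worker takes at most one job per turn, so the
// work interleaves across competing workers until the queue is empty.
let active = true;
while (active) {
  active = false;
  for (const name of workers) {
    const job = queue.take();
    if (job !== undefined) {
      processed.push({ worker: name, job }); // e.g. send an e-mail here
      active = true;
    }
  }
}

const jobIds = processed.map((p) => p.job);
console.log(jobIds.length, new Set(jobIds).size); // 10 10 -> no job ran twice
```

The key design point carries over directly to RabbitMQ: because delivery is exclusive per message, adding more worker nodes increases throughput without any risk of duplicate execution, as long as workers ack only after finishing a job.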
2. If you had to create a Node.js proxy that would route requests based on the hostname to a specific Docker container, what modules would you use, and how long would it take you to have a working proof-of-concept service? Assume that the hostname -> container address and port are provided to you in a JSON file that does not change.
3-6 days, assuming full-time work (8 hrs/day); with anything less than that, the quality would be poor.
That said, I would advise using a product like F5 or NGINX/OpenResty instead of reinventing the wheel.
Task breakdown:
1 day - build a couple of Docker containers on my laptop, each serving a simple Node.js service that logs requests to a datastore
1-2 days - build the proxy service that does the routing (core `http`/`https` modules, or a community module such as `http-proxy`)
1-3 days - write test cases to confirm that routing works as expected; bug-fix, refactor, and harden the router.