👭61.6% Yes! 👬
Amazon’s scale means it can cross-subsidize huge losses from different ventures, plowing profits back into businesses that work. The aim is not to make money on any particular service; Amazon likely lost $7.2 billion on shipping last year and is selling hardware supporting its virtual assistant Alexa around or below cost. It’s adding to the value of the system itself. Entire industries are loss leaders for Amazon. For companies that must make money on what they sell, it’s a terrifying prospect.
One of the systems I need for a site like Booko is a queuing system. As requests to view books and their prices arrive, we queue up any book with stale prices, pushing its ISBN into Beanstalkd. These price-refresh requests are picked up by one of many multithreaded price-lookup processes. Beanstalkd and the daemons have been working together nicely since 2010.
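The staleness check and enqueue step might look something like this. A minimal sketch, not Booko's actual code: the threshold, the `stale?` helper, and the tube name are all assumptions, and the Beanstalkd part uses the beaneater gem.

```ruby
require 'time'

# Prices older than this are considered stale (the threshold is an assumption).
STALE_AFTER = 24 * 60 * 60  # one day, in seconds

def stale?(checked_at, now = Time.now)
  checked_at.nil? || (now - checked_at) > STALE_AFTER
end

# In the request path, stale ISBNs get pushed onto a Beanstalkd tube,
# e.g. with the beaneater gem (tube name assumed):
#
#   tube = Beaneater.new('localhost:11300').tubes['price_refresh']
#   tube.put(isbn) if stale?(price.checked_at)
```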
Recently, however, I became aware that one of the many APIs we use to get prices had introduced a throttle, restricting the rate of incoming API requests and returning 503-type errors when the limit is exceeded.
Too many people wanted to know how much books cost.
The pricing daemon processes wait on the queue for requests; when they pick one up, they spawn a thread per shop. The threads look up prices, which are then collected, collated, and saved to the database. The existing code expects to handle the entire lifecycle of the prices.
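The fan-out-and-collect step can be sketched with plain Ruby threads; `lookup_price` here is a stand-in for whatever each shop-specific API client actually does:

```ruby
# Spawn one thread per shop, then collect the results into a hash of
# shop => price. Thread#value joins the thread and returns its result.
def fetch_prices(isbn, shops)
  threads = shops.map do |shop|
    Thread.new { [shop, lookup_price(shop, isbn)] }
  end
  threads.map(&:value).to_h
end
```

With the prices collated into a single hash, the daemon can save them to the database in one step.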
Here's the current arrangement:
The API we're talking to requires us to serialise and throttle all the requests which go to that shop. This specific API has a very handy feature I'll make use of: it allows submitting 10 lookups in a single query.
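Once the requests for that shop are coming off a single queue, batching them ten at a time is simple; a sketch, with `batches` being my name for it, not the API's:

```ruby
# Group queued ISBNs into batches of up to 10, one batch per API call.
def batches(isbns, size = 10)
  isbns.each_slice(size).to_a
end
```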
The solution I implemented was to make use of Redis and its Pubsub system. First, the price grabbing daemon pushes the ISBN into a Redis list.
Once that's in the queue, we subscribe to the channel '9781743535875' and wait for the response from Redis.
```ruby
redis.subscribe_with_timeout(15, '9781743535875') do |on|
  on.message do |channel, msg|
    shop_data = JSON.parse(msg)
    price.update_attributes(shop_data) unless shop_data.nil?
    redis.unsubscribe
  end
end
```
When the message arrives, we parse the JSON data, update the price model, and then unsubscribe from Redis.
The new daemon reads up to 10 ISBNs from the list and removes them from the list. It uses a MULTI/EXEC transaction to ensure the two actions are atomic.
```ruby
query_size = 10
# Inside MULTI, commands are queued and only run at EXEC, so their
# replies come back as the block's return value, in order.
isbns, _ = redis.multi do
  redis.lrange 'shop_queue', 0, query_size - 1
  # Trimming by query_size also works when fewer than 10 items remain:
  # the whole list was read above, and LTRIM leaves it empty.
  redis.ltrim 'shop_queue', query_size, -1
end
```
The daemon then makes the API requests, looking up the data. After performing the lookup, the daemon publishes the data into the ISBN named channel.
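The publish step is small; a sketch, assuming the batched lookup returned a hash of ISBN => shop data (the `publish_results` name is mine):

```ruby
require 'json'

# Publish each ISBN's data as JSON on a channel named after the ISBN,
# where the waiting price-grabbing daemon is subscribed.
def publish_results(redis, results)
  results.each do |isbn, shop_data|
    redis.publish(isbn, shop_data.to_json)
  end
end
```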
Here's a diagram:
The complete setup is like this:
When you look at total installation costs:
Wind and solar are much cheaper. Not only is the fuel free and exposed to no regulatory risk (in the form of a carbon price), but the technology is simpler and quicker to install.
Australia's chief scientist Alan Finkel went one step further: he factored in the extra costs of adding gas or battery backup to ensure stability and baseload power in the system.
Wind still came out cheapest, with solar only marginally more expensive than black coal.
How long until we see an ARM Mac?
It seems Facebook's advertising algorithms are quite good at categorising people. Next up: algorithms to determine whether those categories should be targetable by ads at all.
New Post over at the Booko Blog: