Public Void @f2prateek

Upgrading a 2010 PC

When I started university in 2010, I picked up a MacBook Pro as my primary laptop. I fell in love with the hardware, but I sorely missed being able to play my favourite games. Midway through the semester, I decided to build a gaming PC. While building my own PC likely wasn’t cheaper, I valued the flexibility it offered. This post walks through how I upgraded the system over the years.

The 2010 build

My initial build was based on this 2010 build from Tom’s Hardware. The disc drive alone gives away the age of the hardware!

Intel Core i7 930 $280
Gigabyte X58A-UD3R $235
OCZ DDR3-1600 3x4GB $260
Western Digital Caviar Black 1TB $95
Lite-On BD-ROM Drive $66
XFX Radeon HD 5770 $150
Corsair TX750W $105
CoolerMaster Storm Scout ATX Tower $90
Total $1281

I wanted to splurge on core components such as the motherboard and CPU. These are a bit trickier to upgrade in isolation, and I wanted to get the best long term value out of them. However, I wish I’d skimped a bit on a couple of parts:

  • The Power Supply: 750W is overkill for this build. My initial plan was to overclock some parts and run two GPUs in CrossFire. I ended up doing neither, and the system’s maximum power draw stays well under 300W.
  • The RAM: 12GB may seem commonplace now, but it cost quite a premium in 2010, when 4GB was sufficient even in high performance builds. Even today, I only use a single 8GB stick.

Upgrades

While the initial build is outdated, regular upgrades have kept the system current enough to play the latest games. This is where the flexibility of a custom built PC really shines. These are the parts I’ve upgraded, and why.

Corsair H60 CPU Cooler; $54.99; Aug 2011 I stuck with the stock cooler in my initial build to save money. However, it loosened during my summer move, and no amount of thermal compound seemed to fix the unstable CPU temperatures. I decided to use this as an opportunity to upgrade to an aftermarket cooler. I’d always had my eye on a custom loop liquid cooling system, but the simplicity of an all-in-one cooler drew me in. For less than $60, the Corsair H60 provided great value, reducing temperatures by 20°C under load.

ADATA SX900 128GB SSD; $89.99; Dec 2013 I had installed an SSD in my MacBook Pro a few months earlier and was blown away by how fast the laptop felt after the upgrade. It felt like a no brainer to do the same for my desktop. SSDs are significantly faster than traditional spinning disk hard drives, so software installed on one typically launches much faster. SSD prices were still pretty high, so I opted for a lower capacity 128GB model as a boot drive and relegated my older hard drive to a secondary drive for media. I was also paranoid about the drive giving out unexpectedly. SSDs are expected to have a shorter lifespan since each cell can only be written a finite number of times, so I made a bunch of software tweaks to minimize the write load and extend its life.

NZXT H440 ATX Mid Tower; $109.99; Dec 2014 Having previously built a PC in India, I wanted a case that would offer the best dust protection. Initially, the CoolerMaster Storm Scout seemed like a great choice - most of the air drawn in passes through filters that can be cleaned. But cleaning the filters was cumbersome and I rarely did it. One of the few times I did put in the effort, I ended up breaking the front panel. It also turns out that dust isn’t as big a problem in Canada as it is in India. So this time around, I opted for a sleeker looking case instead. The blue NZXT H440 fit the requirements perfectly: a sleek, muted look with plenty of capacity. Although the swap was mostly aesthetic (I no longer had to look at the broken front panel), it elevated the look of the PC and made it feel more premium.

Corsair Hydro Series H100i v2 CPU Cooler; $95.99; Dec 2017 I’ve had terrible luck with CPU coolers! The first upgrade I made to my computer finally gave out when some of the liquid leaked from the H60 cooler (possibly due to my seventh move in 7 years). This caused its cooling performance to deteriorate significantly - so much so that the computer wouldn’t even boot. I was tempted by the NZXT Kraken X62, but the H60 did last me 6 years, so I decided to stick with Corsair. I picked the Corsair H100i v2 for its easy installation and solid performance. With an even bigger radiator than the H60, it allowed my CPU to run cooler than ever before.

Asus Phoenix PH-GTX1050Ti; $223.99; Jan 2018 The Radeon HD 5770 was the biggest bottleneck in my 2010 build, but it was sufficient for my needs at the time as I mostly played games that were not GPU intensive (such as FIFA, Counter-Strike and StarCraft). My original plan was to pick up a second 5770 and run the two GPUs in CrossFire. 7 years later, the GPU was beginning to show its age (I could barely get 10 FPS in Hitman) and I knew I needed to upgrade. However, the mining craze made finding a second 5770 impossible. Rather than wait for new supply, I jumped the gun and picked up a 1050Ti, which still offered a reasonable performance boost over the 5770. I also wanted to try an Nvidia GPU, having exclusively used AMD ones all my life. I ended up returning the Gigabyte version I picked up the first time around due to instability, but have been quite happy with the Asus version. I flirted with the idea of going all out and picking up a 1080Ti, but ultimately decided against it. I don’t think I could have picked up a much better GPU without hitting CPU and memory bottlenecks.

Corsair Vengeance Pro DDR3 1x8GB 2400MHz; $99.99; Feb 2018 In January 2017, I noticed that the system was detecting just one stick of RAM, which left me with only 4GB of usable RAM. I purchased a new RAM kit to see if the sticks themselves were the problem; they weren’t, so I think it’s likely a CPU or motherboard issue. Based on past usage, I knew 8GB of RAM would be plenty for me, so I ended up buying a single 8GB stick to get the most use out of the DIMM slot that’s still working. This is one part I didn’t do a ton of research on - my criteria were simple: a single stick of DDR3 RAM with the highest capacity possible, from a trusted manufacturer. The Vengeance Pro checked all those boxes.

Crucial MX500 500GB SSD; $99.99; Dec 2018 While the SX900 continues to perform admirably, its capacity hasn’t aged well. For comparison, FIFA 13 required 8GB of disk space, but FIFA 18 requires a whopping 45GB. This meant I was quickly running out of space after installing just a couple of games. I would have loved to pick up an M.2 drive, but didn’t quite want to upgrade my motherboard just yet. The Crucial MX500 was a top Wirecutter pick, and I got lucky with my timing as it was just around Cyber Monday.

What’s Next

10 years in, this is still a remarkable system, capable of running applications at high performance and playing most of the latest games at 60 frames per second on medium-high settings. It has outlasted all of my other computers by a mile. I picked up the parts in the initial build from NCIX, and the system even outlasted them!

However, I’ve been getting more and more into competitive games like Rocket League and Rainbow Six, and with 144Hz monitors becoming mainstream, I’ve been craving a system capable of pushing such a high FPS - over double what this system can do today. Small form factor builds have caught my eye as well - I’m curious to see how much power can be packed into these cases. This would also be a good opportunity to upgrade the motherboard and finally take advantage of the latest technologies such as the Z370 chipset, M.2 drives and DDR4 RAM. I’ve upgraded to this build, and hope to write about it in the future.

Repurposing a Six Year Old Kindle

Last year, I upgraded to one of the newer Paperwhite models. My 2012 Kindle Touch was starting to show its age. However, since the display was still functional, I was interested in seeing if I could repurpose it. After looking around for inspiration, Paul Stamatiou’s Raspberry Pi photo frame caught my eye. The Kindle’s e-ink display actually makes it a great fit for a photo frame — it can keep going for months without a charge. Here’s how I did it:

Step 1: Jailbreaking the Kindle

To extend the Kindle’s functionality beyond what it was originally meant to do, you’re going to have to jailbreak it. I followed the process from Yifan Lu’s blog (if you have a different Kindle, check out the MobileRead forums). This may seem daunting, but it was honestly the easiest part.

  1. Download the jailbreak files.
  2. Copy data.tar.gz to the root folder of your Kindle.
  3. Restart the Kindle.

After restarting, you should see the message “You are Jailbroken” appear (if you do not see this, refer to Yifan’s post for further instructions).

Step 2: Install the Kindle Screensaver Hack

With the Kindle jailbroken, I decided to go with a simple screensaver hack that allows you to display custom pictures. Similar to the jailbreak, you’ll need to download the files, copy them to the root folder and restart the Kindle.

Step 3: Prepare your images

To use custom images with the screensaver hack, you’ll need to follow a strict set of rules.

  • Each image must be a grayscale PNG that is 600 × 800. I used this tool to prepare the images.
  • Each image must be named bg_xsmall_ss##.png, where ## is a two digit number from 00 to 99.
  • The image numbers start at 00 and must be sequential (i.e. you cannot skip a number).

Once the images were ready, I copied them to the screensavers directory on my Kindle.
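
If you’d rather script the conversion than use a web tool, here’s a rough sketch in Go of what it could look like. This is my own illustration rather than part of the original hack: it leans on the golang.org/x/image/draw package for scaling, and it simply squashes each photo to 600 × 800 instead of cropping.

// Sketch: convert each input photo into a Kindle screensaver image.
// Assumes golang.org/x/image/draw for scaling; squashes the aspect
// ratio to 600x800 rather than cropping.
package main

import (
  "fmt"
  "image"
  _ "image/jpeg" // register JPEG decoding for image.Decode
  "image/png"
  "log"
  "os"

  "golang.org/x/image/draw"
)

func main() {
  for i, path := range os.Args[1:] {
    f, err := os.Open(path)
    if err != nil {
      log.Fatal(err)
    }
    src, _, err := image.Decode(f)
    f.Close()
    if err != nil {
      log.Fatal(err)
    }

    // The hack wants 600x800 grayscale PNGs; image.Gray handles the
    // grayscale conversion as the scaler writes into it.
    dst := image.NewGray(image.Rect(0, 0, 600, 800))
    draw.ApproxBiLinear.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Src, nil)

    // Names must be bg_xsmall_ss00.png through bg_xsmall_ss99.png,
    // numbered sequentially with no gaps.
    out, err := os.Create(fmt.Sprintf("bg_xsmall_ss%02d.png", i))
    if err != nil {
      log.Fatal(err)
    }
    if err := png.Encode(out, dst); err != nil {
      log.Fatal(err)
    }
    out.Close()
  }
}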

Step 4 (Optional): Gut the Kindle

Before putting the Kindle in a frame, I decided to strip away any unnecessary parts. Following the iFixit teardown guide, I stripped the Kindle down completely to see what I could remove.

gutted-kindle-1

In the end, I removed just the front and back bezels, and the plastic 3G placeholder. After putting it back together, here’s what it looked like.

gutted-kindle-2

Step 5: Frame the Kindle

Framing the Kindle was the trickiest part. For the first version, I picked up a simple black frame. I opened up the frame, lined up the display and stuck it in place with electrical tape. I drilled some holes in the bottom to make room for a charging cable and for copying over new images. It wasn’t the prettiest (I forgot to take a picture of the back), but it did the job.

frame-v0

Once I had used this hack for a few weeks, I had a better idea of what I wanted. For the final frame, I had a custom framing store build me one.

frame-v1

Having the frame built professionally ended up being a great idea. They built a channel into the bottom for a micro USB cable to slip through, and a door to access the Kindle battery in case I ever need to replace it.

frame-v1

frame-v1

What’s next?

I’ve been really enjoying the Kindle photo frame. It fits in perfectly on our photo wall, and I love that I can change the images every once in a while. There are a couple of improvements I’d like to pursue in the future:

  • Prevent the Kindle’s home UI elements from bleeding through.

  • The screensaver only rotates images when you turn the display on and off. It would be great to script this so it rotates them on a schedule. In theory, this should also fix the bleeding issue.

  • Adapt the code from the Kindle weather display project to provide images remotely instead of having to copy them over manually. I started going down this route, but was unable to get the client side Python code to work.

Unwrapping data with Retrofit 2

Retrofit is my library of choice for communicating with HTTP services in Java. One of my favourite features in Retrofit 2 is its Converter (and more specifically its new counterpart — Converter.Factory) API.

Envelopes

Let’s take a common use case — when APIs wrap the data they return in an envelope type. For instance, Foursquare’s API returns the following JSON structure:

{
  "meta": ...,
  "notifications": ...,
  "response": ...
}

This JSON can be represented simply as a Java type.

class Envelope<T> {
  Meta meta;
  Notifications notifications;
  T response;
}

And naively, our API declaration could use this envelope directly.

interface FoursquareAPI {
  @GET("/venues/explore")
  Call<Envelope<Venues>> explore();
}

But wouldn’t it be better if you could ignore this envelope and work with your desired types directly? Your client wouldn’t have to know about the Envelope type at all, or worry about unwrapping it manually.

interface FoursquareAPI {
  @GET("/venues/explore")
  Call<Venues> explore();
}

Converter

In Retrofit, a Converter is a mechanism to convert data from one type to another. Retrofit doesn’t ship with any converters by default, but provides modules backed by popular serialization libraries. We’ll lean on these modules for the heavy lifting, but write some custom code to get our desired behaviour.

First, we write a converter that parses the data as an Envelope<T> object by delegating the work to another converter. Once the data is parsed, our converter extracts the desired response from the Envelope object.

class EnvelopeConverter<T> implements Converter<ResponseBody, T> {
  final Converter<ResponseBody, Envelope<T>> delegate;

  EnvelopeConverter(Converter<ResponseBody, Envelope<T>> delegate) {
    this.delegate = delegate;
  }

  @Override
  public T convert(ResponseBody responseBody) throws IOException {
    Envelope<T> envelope = delegate.convert(responseBody);
    return envelope.response;
  }
}

Converter Factory

When we create our Retrofit instance, we can give it Converter.Factory instances. Retrofit consults these factories in order, asking each whether it can return a converter for a given Java type.

To let Retrofit know about our custom converter, we’ll create a custom factory that returns our EnvelopeConverter. Our factory will also ask Retrofit for the “next” converter: the one that would have deserialized the Envelope<T> type if our factory didn’t exist. This is the converter that the EnvelopeConverter delegates to.

class EnvelopeConverterFactory extends Converter.Factory {
  @Override
  public Converter<ResponseBody, ?> responseBodyConverter(
      Type type,
      Annotation[] annotations,
      Retrofit retrofit) {
    // Build the Envelope<T> type for the requested type T.
    // (Types is Moshi's com.squareup.moshi.Types utility.)
    Type envelopeType = Types.newParameterizedType(Envelope.class, type);
    // Ask Retrofit for the converter that would have handled Envelope<T>.
    Converter<ResponseBody, Envelope<Object>> delegate =
        retrofit.nextResponseBodyConverter(this, envelopeType, annotations);
    return new EnvelopeConverter<>(delegate);
  }
}

Putting it all together

Armed with our factory, we can create our Retrofit instance. We still need to supply our “next” converter (Moshi in this example) that our EnvelopeConverter will use to deserialize the Envelope<T> type.

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://localhost:1234")
        .addConverterFactory(new EnvelopeConverterFactory())
        .addConverterFactory(MoshiConverterFactory.create())
        .build();

After wiring this all up, we can use our simplified API. And it’ll pay dividends as the complexity of our apps and APIs grows.

interface Service {
  @GET("/venues/explore")
  Call<Venues> explore();
}

If you’d like to see a complete working example, this is the approach we’ve used in our JSON-RPC client powered by Retrofit.

Beyond Converters

Converters barely scratch the surface when it comes to customizing Retrofit’s functionality. If you’d like to hack around even more, I’d recommend checking out Retrofit’s CallAdapter API, which powers functionality such as its RxJava integration, and other use cases you could previously only dream of.

Powering up HTTP clients with Train

Middlewares make it easy to write independent and reusable modules for HTTP servers. For instance, we’ve abstracted common reporting with our statsd and logger middlewares. This makes it easy for new services to be built with reliable reporting from day one.

h1 := statsd.New(stats)(app)
h2 := logger.New()(h1)
http.ListenAndServe(":12345", h2)

Although it’s easy enough to write ourselves, Alice is a tiny library that makes chaining multiple middlewares easy.

chain := alice.New(logger.New(), statsd.New(stats))
http.ListenAndServe(":12345", chain.Then(app))

While working on sources, we realized we needed a similar solution for writing HTTP clients. At their core, sources are pretty simple. Sources make HTTP requests to a third party service, translate the responses into the format our warehouses expect, and forward the data to our warehouse objects API. A majority of the errors come from failing HTTP requests. To have complete visibility into the performance of sources, we needed to be able to record metrics and log the HTTP client’s activity across multiple codebases.

Borrowing from OkHttp’s interceptors, I wrote Train. At its core, Train takes a series of Interceptors and combines them to return a RoundTripper that runs the interceptors in order.
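
To make the shape of the API concrete, here’s a simplified sketch of how interceptors can be folded into an http.RoundTripper. This is my own illustration of the idea (the names mirror how the examples below use the library); see the Train repository for the real implementation.

package train

import "net/http"

// Chain hands each interceptor the in-flight request, plus a way to
// pass it along to the rest of the chain.
type Chain interface {
  Request() *http.Request
  Proceed(*http.Request) (*http.Response, error)
}

// Interceptor can observe, modify, retry or short circuit a call.
type Interceptor interface {
  Intercept(Chain) (*http.Response, error)
}

// InterceptorFunc adapts a plain function, like the examples below,
// to the Interceptor interface.
type InterceptorFunc func(Chain) (*http.Response, error)

func (f InterceptorFunc) Intercept(c Chain) (*http.Response, error) { return f(c) }

// chain is one link: the pending request, the remaining interceptors,
// and the transport that ultimately performs the network call.
type chain struct {
  req          *http.Request
  interceptors []Interceptor
  transport    http.RoundTripper
}

func (c *chain) Request() *http.Request { return c.req }

func (c *chain) Proceed(req *http.Request) (*http.Response, error) {
  // No interceptors left: hit the network.
  if len(c.interceptors) == 0 {
    return c.transport.RoundTrip(req)
  }
  // Otherwise, invoke the next interceptor with the rest of the chain.
  next := &chain{req, c.interceptors[1:], c.transport}
  return c.interceptors[0].Intercept(next)
}

type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) {
  return f(req)
}

// Transport combines interceptors into a RoundTripper that runs them in order.
func Transport(interceptors ...Interceptor) http.RoundTripper {
  return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
    c := &chain{req, interceptors, http.DefaultTransport}
    return c.Proceed(req)
  })
}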

Powerful

Interceptors enable endless use cases - they can observe, modify and even retry calls.

Observing Requests and Responses

Interceptors can continue the chain as is and observe outgoing requests and incoming responses. This is useful for reporting purposes, such as logging and stats.

func Dump(chain train.Chain) (*http.Response, error) {
  req := chain.Request()
  // Dump the outgoing request on the way in.
  if dump, err := httputil.DumpRequestOut(req, true); err == nil {
    fmt.Println(string(dump))
  }

  resp, err := chain.Proceed(req)
  if err != nil {
    return nil, err
  }
  // Dump the incoming response on the way out.
  if dump, err := httputil.DumpResponse(resp, true); err == nil {
    fmt.Println(string(dump))
  }

  return resp, nil
}

Modifying Requests

Interceptors can modify outgoing requests. For example, you can compress the request body if your server supports it.

func Compress(chain train.Chain) (*http.Response, error) {
  req := chain.Request()

  // Only compress bodies that aren't already encoded.
  if req.Body != nil && req.Header.Get("Content-Encoding") == "" {
    var buf bytes.Buffer
    z := zlib.NewWriter(&buf)
    if _, err := io.Copy(z, req.Body); err != nil {
      return nil, err
    }
    if err := z.Close(); err != nil {
      return nil, err
    }
    req.Body = ioutil.NopCloser(&buf)
    req.ContentLength = int64(buf.Len())
    req.Header.Set("Content-Encoding", "zlib")
  }

  return chain.Proceed(req)
}

Modifying Responses

Interceptors can modify incoming responses. Similar to above, you can decompress the response body before your application processes the response.

func Decompress(chain train.Chain) (*http.Response, error) {
  req := chain.Request()
  resp, err := chain.Proceed(req)
  if err != nil {
    return resp, err
  }

  contentEncoding := resp.Header.Get("Content-Encoding")
  if resp.Body != nil && contentEncoding == "zlib" {
    z, err := zlib.NewReader(resp.Body)
    if err != nil {
      return nil, err
    }
    resp.Body = z
  }

  return resp, nil
}

Short Circuiting

Interceptors can short circuit the chain — this makes it great for testing.

func Short(train.Chain) (*http.Response, error) {
  return nil, errors.New("somebody set up us the bomb")
}

Pluggable

Like HTTP server middlewares, interceptors make it easy to share common HTTP client logic. Interceptors can be plugged into an HTTP client as its transport.

transport := train.Transport(logger.New(), statsd.New(stats))
client := &http.Client{
  Transport: transport,
}

This also makes it easy to plug into client libraries built by other developers. For instance, we plugged our stats interceptor into the Intercom Go library and got logs and metrics for free, without needing to modify the source.

t := train.Transport(logger.New(), statsd.New(stats))

return &interfaces.IntercomHTTPClient{
  Client: &http.Client{
    Transport: t,
  },
}

Chainable

Interceptors build upon the Chain interface.

Interceptors are consulted in the order they are provided. You’ll need to decide what order you want your interceptors to be called in.

For example, this chain will record stats about the compressed request and response. The stats interceptor is invoked after the compression interceptor compresses the request and before the compression interceptor decompresses the response.

transport := train.Transport(compress, log, stats)

The second example will record stats about the decompressed request and response. The stats interceptor is invoked before the compression interceptor compresses the request and after the compression interceptor decompresses the response.

transport := train.Transport(log, stats, compress)

Extensible

Train is designed to be extensible. We’ve been using it for a while to power up the standard library HTTP client in our Go sources — including adding logging, fixing server errors, and collecting stats for our invaluable Datadog dashboards.

Try it out and let me know what you think!

Gzip

This week we enabled gzip for our mobile libraries. Gzip is a compression format widely used in HTTP networking. With gzip, we saw more than a 10x reduction in the size of the POST request bodies that upload our batched event data.

Our Tracking API uses (mostly) vanilla Go. Enabling gzip decompression was a breeze using the compress/gzip package (thanks Amir).

func (s *Server) handle(w http.ResponseWriter, r *http.Request) {
  encoding := r.Header.Get("Content-Encoding")
  if encoding == "gzip" {
    // Transparently decompress gzipped request bodies.
    z, err := gzip.NewReader(r.Body)
    if err != nil {
      http.Error(w, "malformed gzip content", 400)
      return
    }
    defer z.Close()
    r.Body = z
  }
  ...
}

On Android, we can take advantage of GZIPOutputStream from the Java standard library.

void post(byte[] data) throws IOException {
  URL url = new URL("https://api.segment.io/v1/import");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  conn.setDoOutput(true); // required before writing a request body
  conn.setRequestProperty("Content-Encoding", "gzip");
  conn.setRequestProperty("Content-Type", "application/json");
  OutputStream os = conn.getOutputStream();
  OutputStream gzipped = new GZIPOutputStream(os);
  gzipped.write(data);
  ...
}

Adding iOS support was the most challenging of the three. There are no standard library APIs for gzipping data, so we pulled in the relevant code from Nick Lockwood’s implementation on Github. The final snippet is tiny and fits perfectly as an NSData extension.

#import "NSData+GZIP.h"

- (void)sendData:(NSData *)data
{
    NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://api.segment.io/v1/import"]];
    [urlRequest setValue:@"gzip" forHTTPHeaderField:@"Content-Encoding"];
    [urlRequest setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    [urlRequest setHTTPMethod:@"POST"];
    [urlRequest setHTTPBody:[data gzippedData]];
    ...
}

Implementing it across our different codebases was surprisingly easy. We went from discussion to being up and running within a day’s worth of work across our server and mobile libraries. And the savings definitely made it worth our time!