Dev Blog

Optimizing a Rails Controller for Page Load Efficiency and Better Search Results

Hi, welcome back to another installment of our PLUS QA dev blog series! To reintroduce myself, my name is Joe Friesen and I am a senior software engineer at PLUS QA.

In our previous blog entry, we described our efforts to improve the user experience and code quality of PLUS QA’s Device Lab: a feature of our Test Platform web application. We overhauled the naive React components relying on local state for data management and installed Redux Toolkit and RTK Query for fast client-side data fetching and caching. This will make things much easier to read and maintain and will minimize annoying front-end bugs.

In this installment, we turn our attention to the Rails back-end. The initial impetus for this project was to improve load times and usability, and when we look at our Rails controller's index action, we'll see there's plenty of room for improvement.

Rails Controllers and Performant Actions

We've now set up the front-end for success by offloading the burden of state management and API requests to the Redux Toolkit layer, allowing any child component within the DeviceLab component tree to access data and trigger updates via the RTK Query generated hooks. That goes a long way toward readable, maintainable code, but this all started because our page load times for the Device Lab were over 20 seconds. Streamlined state management is nice, but it still takes a significant amount of time before the Rails controller serves up the view that takes advantage of it.

Caption: Waiting for the Test Platform Device Lab to finish loading…

Refactoring Device Lab Controller

Two issues immediately become apparent in the Device Lab's initial page load: first, there's a long wait until first paint in the browser, and second, when we actually look at the list, we're loading all of our Device Lab devices -- hundreds of them -- in a single list. Personally, I believe a better user experience here is for the server-side rendered view to load as quickly as possible, then fetch the device list client-side with loading spinners. Even if we made no changes to make the list lookup more efficient (and we will!), the experience is better for the user -- the page renders a basic view that gives loading feedback, rather than a blank page for multiple seconds (which just looks broken).

A common issue across some of our controllers is the tendency to construct an index list or show entity ad hoc within the controller action, which leads to non-DRY, brittle, and difficult-to-maintain controllers. For our index action, what we'd rather do is: respond to an HTML request with just the bare minimum of information the Test Platform app needs to load (current_user, current_project, etc.), passed as props to the server-rendered React component; respond to a JSON request with our list of devices for the Device Lab table; and, most importantly, offload the logic for building that data to private controller methods or, better yet, let the Device model take care of how lists of Device entities are built.

Where previously we had a controller action many lines long, constructing several different instance variables meant to be passed as props to our React component, we instead get a much simpler, easier to read action:


# app/controllers/device_lab_controller.rb
class DeviceLabController < ApplicationController
  before_action :set_current_user, only: [:index]

  def index
    fetch_devices_list if request.format.json?

    respond_to do |format|
      format.json do
        render(json: { devices: @devices }, status: :ok)
      end
      format.html
    end
  end

  private

  def fetch_devices_list
    device_list = Device.filter(filter_params) # see Filtering, sorting and searching below
      .lab_devices
      .with_classifications
      .includes(:user)

    @devices = ActiveModelSerializers::SerializableResource.new(
      device_list,
      each_serializer: DeviceSerializer,
    ).as_json
  end
end

We abstract away the logic for deciding exactly what our devices list should be, leaving that up to the model, and let the controller simply state that it needs a device list. We filter down to only the devices that are eligible for checkout, and we eager load with_classifications and includes(:user) because we know we'll need device platform and OS information, along with the associated user for devices that are checked out. At this point we also want to neutralize any N+1 query issues that may exist or may have been unwittingly created -- the bullet gem is invaluable for staying on top of these as we do refactor work like this. We'll leave the further details of fetching, filtering, and sorting our device list for later.
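
If you haven't used it, bullet needs only a few lines of configuration in the development environment to start flagging N+1 queries and unnecessary eager loads as you click around. The snippet below is a typical setup rather than our exact configuration:


# config/environments/development.rb
config.after_initialize do
  Bullet.enable        = true  # turn bullet on in development
  Bullet.bullet_logger = true  # write warnings to log/bullet.log
  Bullet.add_footer    = true  # also show warnings directly in the rendered page
end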

Paginating the index list

Our initial list of mobile phones, tablets, laptops, etc., runs into the hundreds, and we have to sit and wait for that whole list to load before doing anything with our table. Maybe your brain, eyeballs, and monitor are all a lot bigger than mine, but I certainly can't hold information about 600+ devices in my head at once. It's a much better experience to load just a few devices and paginate the results instead.

The easiest way to chunk a list of records like this is with the pagy gem. After installing the gem, all we need to do is feed our device list to pagy, which will return the first page of records, along with metadata about the paging. Here's what our controller would look like after doing this.


# app/controllers/device_lab_controller.rb
class DeviceLabController < ApplicationController
  include Pagy::Backend
  before_action :set_current_user, only: [:index]

  def index
    fetch_devices_list if request.format.json?

    respond_to do |format|
      format.json do
        render(json: { devices: @devices, paging: @paging }, status: :ok)
      end
      format.html
    end
  end

  private

  def fetch_devices_list
    @devices = Device.filter(filter_params) # see Filtering, sorting and searching below
      .lab_devices
      .with_classifications
      .includes(:user)
    paginate_list
    serialize_list
  end

  def paginate_list
    @paging, @devices = pagy(@devices, items: 25)
  end

  def serialize_list
    @devices = ActiveModelSerializers::SerializableResource.new(@devices, each_serializer: DeviceSerializer).as_json
  end
end

Great, now when we make a JSON request to the index action, we'll get just the first 25 devices returned for our query. This alone brings the response time for this query to just 10% of the previous time. We now include the paging metadata in our response, which we can pass to table UI elements to indicate which page this data represents, how many items are in the page, etc. Then, to increment/decrement the page, we can include a page= query parameter in subsequent requests to the index action.
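
The exact shape of that paging metadata is up to us. Here's a minimal sketch of one way to build it -- the hash keys are our own choice, while the page, pages, count, next, and prev readers come from the Pagy object itself (pagy also ships a metadata extra that can generate a hash like this for you):


# app/controllers/device_lab_controller.rb (hypothetical sketch of shaping the paging hash)
def paginate_list
  pagy_object, @devices = pagy(@devices, items: 25)
  @paging = {
    page: pagy_object.page,    # current page number
    pages: pagy_object.pages,  # total number of pages
    count: pagy_object.count,  # total record count across all pages
    next: pagy_object.next,    # next page number, or nil on the last page
    prev: pagy_object.prev,    # previous page number, or nil on the first page
  }
end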

pagy is a great Ruby gem, and I would argue it's nearly essential for any Rails web project -- if you're anything like me, you've tried to roll your own solution for this with much less neat results. It can also do a heck of a lot more than what we've done here, on both the front-end and the back-end; check out the pagy documentation here.

Filtering, sorting and searching

There's one other important thing to note about our setup: the filtering, sorting, and searching are being done entirely within the device_lab.jsx React component rather than in the API layer. A search component triggers a filter on the component's device list state, clicking on a column sorts that component state, and so on. To me, this is both a theoretical and a practical problem.

The first is a code smell: we've failed to separate concerns. We should let the back-end handle parsing query parameters to sort and filter our results, and let the JavaScript handle how we show that data to the user. The second, practical issue is that the front-end is now only sorting or filtering through a small sample of our total devices rather than all Device Lab devices. Suppose my first 25 loaded devices are all iOS phones and I type Android in the search field -- the table will tell me we have 0 Android phones, which I know isn't correct. To get accurate results, we need to move this business logic out of our components and into the back-end where it belongs.

The front-end changes are straightforward enough: instead of doing a setState update in our onChange handlers, we simply update a query parameter and let our RTK Query Redux slice send a new request to our index action with that parameter appended. On the back-end, we will need to create a scope in our Device model to handle each possible filter value.


# app/models/concerns/filterable.rb
module Filterable
  extend ActiveSupport::Concern

  module ClassMethods
    def filter(filtering_params)
      results = where(nil)
      filtering_params.each do |key, value|
        results = results.public_send("filter_by_#{key}", value) if value.present?
      end
      results
    end

    def sorts(sorting_params)
      direction = ['asc', 'desc'].include?(sorting_params[:order]) ? sorting_params[:order] : 'asc'

      results = where(nil)
      results = results.public_send("sort_by_#{sorting_params[:sort]}", direction) if sorting_params[:sort].present?
      results
    end
  end
end

Now, when we include this concern in a model, it attaches filter and sorts as class methods on that model. The filter method will call every scope defined in that model that begins with filter_by_, provided the matching key is present in our request params. Then it's a matter of making sure we track the allowable filter params in the controller and that they match the appropriate scopes in the model. For our Device model and Device Lab controller, that looks like this:


# app/controllers/device_lab_controller.rb (excerpted)
class DeviceLabController < ApplicationController
  def fetch_devices_list
    @devices = Device.filter(filter_params)
      .lab_devices
      .with_classifications
      .includes(:user)
    paginate_list
    serialize_list
  end
  # ...

  def filter_params
    device_lab_params.slice(
      :teams,
      :platforms,
      :available,
      :office,
      :office_only,
      :home,
      :sim,
      :tier1,
      :tier2,
      :search,
    )
  end
end

# app/models/device.rb (excerpted)
class Device < ApplicationRecord
  include Filterable
  # ...

  scope :filter_by_teams, ->(team_ids) {
    # get every device checked out by a user in any of the selected teams
    where(user_id: User.joins(:teams).where(teams: { id: team_ids }).select(:id))
  }
  scope :filter_by_platforms, ->(platform_ids) {
    # etc., one scope for each of the allowable keys in device_lab_controller#filter_params
  }
  # ...
end
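
One piece not shown in these excerpts is device_lab_params itself. As a purely hypothetical sketch (the exact permit list is an assumption for illustration), it might permit the scalar filters plus array values for the multi-select filters:


# app/controllers/device_lab_controller.rb (hypothetical sketch -- not our exact permit list)
def device_lab_params
  params.permit(
    :available, :office, :office_only, :home, :sim, :tier1, :tier2,
    :search, :sort, :order, :page,
    teams: [], platforms: [],  # multi-select filters arrive as arrays of ids
  )
end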

We do the same thing with our column sorts. If we want to list our devices by those that have been checked out most recently, we make the request to our index action with sort=last_checkout and order=desc query parameters, define a sort_by_last_checkout scope in our Device model, and call Device.sorts(sorting_params) in our fetch_devices_list controller method.
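
As a rough sketch, that pairing might look like the following -- the last_checkout_at column name and the sorting_params helper are assumptions for illustration, not our exact schema:


# app/models/device.rb (hypothetical sketch -- column name assumed)
scope :sort_by_last_checkout, ->(direction) {
  # direction has already been whitelisted to asc/desc by the sorts class method
  order(last_checkout_at: direction)
}

# app/controllers/device_lab_controller.rb (hypothetical sketch)
# chained in fetch_devices_list as Device.filter(filter_params).sorts(sorting_params)...
def sorting_params
  device_lab_params.slice(:sort, :order)
end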

The search scope is a special case here. On the front-end, the JavaScript search is naive; it simply converts a device record to a string and returns true if it contains a substring matching our query. We can define a faster, more robust search, again with the help of a venerable third-party library. SQL databases are a lot faster at searching records than JavaScript is, and the pg_search gem lets us harness ActiveRecord to generate the SQL queries for those searches easily.

For us, we want a robust search that will match against Device record columns as well as columns in associated records, such as the checkout user's name/email and device join tables like DeviceOs or DevicePlatform. We also want it to play nicely with our Filterable concern, so we need to make sure it follows our filter_by_ naming convention. The scope will look like this:


class Device < ApplicationRecord
  include Filterable
  include PgSearch::Model

  # ...

  pg_search_scope :filter_by_search,
    using: {
      tsearch: { prefix: true },
    },
    against: [
      ["serial", "D"],
      ["id", "D"],
      ["notes", "D"],
    ],
    associated_against: {
      device_os: [
        ["value", "C"],
      ],
      device_platform: [
        ["value", "C"],
      ],
      device_name: [
        ["value", "B"],
      ],
      device_type: [
        ["value", "B"],
      ],
      device_make: [
        ["value", "B"],
      ],
      user: [
        ["name", "A"],
        ["email", "A"],
      ],
    }

  # ...
end

pg_search does a ranked ordering of matching records, and we can weight the ranking to make certain attributes or associations higher priority than others when determining sort order. That's what these "A" - "D" strings represent. For example, if someone searches a specific device serial number, they're likely looking for one specific device and the result set will be length one, so there's no need to worry about sorting those results; and if a search term merely matches a substring in a serial number, it's likely just a coincidence, so we can safely sort serial number matches at the lowest priority, "D". Likewise, if a user's name matches, we know the searcher is specifically interested in devices currently checked out by that user, so we want to bring those matches to the front, and they get the highest priority, "A".
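
As a quick, hypothetical usage example (the email address below is made up), the search term is just another filter param, so it composes with the rest of the Filterable machinery and the existing scopes:


# Hypothetical usage -- search flows through the same filter call as every other param
Device.filter(search: "jane.doe@plusqa.com")
      .lab_devices
      .with_classifications
      .includes(:user)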

Just like the pagy gem, pg_search is a venerable and longstanding tool for Rails developers, and can do a lot more than we're doing here. Check the pg_search gem's documentation here.

Results & Conclusion

After making all of these changes, we've now got a snappy, responsive view that gets to first paint quickly, executes performant client-side data fetching, and makes searching and checkout of devices a significantly improved user experience. There are further enhancements we could make, including analyzing our serializers and possibly caching long-lived data in our Redis store then updating/busting cache on device edit or checkout, but that can wait for another day.
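
To make that last idea a little more concrete, here is a sketch of what the caching step could look like -- the cache key, expiry, and invalidation strategy are assumptions rather than anything we've implemented:


# Hypothetical sketch only -- key, expiry, and invalidation are assumptions
def serialize_list
  cache_key = ["device-lab", filter_params.to_h, params[:page]]
  @devices = Rails.cache.fetch(cache_key, expires_in: 10.minutes) do
    ActiveModelSerializers::SerializableResource.new(@devices, each_serializer: DeviceSerializer).as_json
  end
  # a Device after_commit callback (or cache key versioning) would then expire
  # stale entries when a device is edited or checked out
end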

Thanks very much for reading and if you have any questions or feedback about this latest dev blog series, feel free to reach out to us at devs@plusqa.com!