Static site generators (SSGs), such as Jekyll, Next, Nuxt, Gatsby, and Hugo, provide a way to create websites that consist of static pages. The common problem with static websites, however, is that they rarely come with search functionality out of the box.

So how do you add search to a static website? In short: generate an index from selected content of your site's pages, set up a search engine and an API for accessing the index, and create a user interface so your users can search the index and see the results on the web page.

While adding search to a static website should include at least these elements, you can implement the search with various methods.

Let’s look at three different ways you can add search functionality to a static website:

  • building simple search functionality with Simple-Jekyll-Search as an example
  • building fully-featured search with building blocks
  • simple installation of ready-made fully-featured search SaaS

First things first, what are static site generators?

A static site generator creates static websites from source files. The source files include, among other things, configuration files, templates, and content. With the newer JavaScript-based static site generators, the content can be requested from a database, while with the more traditional static site generators the content is saved as static source files.

The static site generator uses the source files to generate static web pages. Pages can be pre-rendered, which means that the pages are compiled to static HTML files (web pages) and uploaded to the server. When the user visits these web pages in a web browser, the delivered pages match the HTML files stored on the server.

The JavaScript-based static site generators, however, can also utilize server-side rendering (SSR), which means that the page source is generated with JavaScript on the server as a response to each user request. In other words, when the user loads a page with the browser, the JavaScript app on the server creates the page ‘on the fly’ and returns it to the browser.

According to GitHub, the most prominent static site generators are:

Jekyll

  • Simple blog-aware static site generator
  • Developed with Ruby
  • Templating Liquid (content in Markdown)

Hugo

  • Fast static site generator distributed as a single executable
  • Developed with Go
  • Templating with Go templates (Amber and Ace also supported)

Gatsby

  • Static site generator for React + GraphQL apps
  • Can use multiple sources of data (CMS, SaaS services, etc.)
  • Developed with JavaScript
  • Templating React
  • Server-side rendering (SSR)

Next

  • Static site generator for React apps
  • Developed with JavaScript
  • Templating React
  • Server-side rendering (SSR)

Nuxt

  • Static site generator for Vue.js apps
  • Developed with JavaScript
  • Templating Vue.js
  • Server-side rendering (SSR) and pre-rendering

Building simple search functionality with Simple-Jekyll-Search

While you can choose from a wide range of static site generators, we chose Jekyll because it lets us show the basic principles of adding search to a static website with a very simple example. As the search solution, we chose Simple-Jekyll-Search.

The idea of the search is to save content from the web pages to a file that the search can reference. In this case, the file is generated by a template that outputs specified content from the web pages to a JSON file each time the site is compiled.

The search has a search widget which consists of the search box (the input field) where the user enters the search terms. It also has an element that displays the search results on the page.

Setting up the search is reasonably easy for those who are familiar with HTML, CSS, and JavaScript and know how the different files are organized in Jekyll. For more detailed information on setting up the search, visit the project’s GitHub repository and the tutorial blog post.
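Simple-Jekyll-Search builds its JSON index with a Liquid template at build time. As a language-neutral illustration of what ends up in that index, here is a hypothetical Python sketch (the `build_index` helper and the field names are assumptions for illustration, not part of Simple-Jekyll-Search):

```python
import json

def build_index(pages):
    """Build a Simple-Jekyll-Search-style JSON index from page metadata.

    Each entry mirrors the kind of fields the Liquid template emits:
    title, category, tags, url, and date.
    """
    return json.dumps(
        [
            {
                "title": page["title"],
                "category": page.get("category", ""),
                "tags": " ".join(page.get("tags", [])),
                "url": page["url"],
                "date": page.get("date", ""),
            }
            for page in pages
        ],
        indent=2,
    )

# Example input: two pages as Jekyll would expose them to the template
pages = [
    {"title": "Hello World", "url": "/2019/01/01/hello.html",
     "tags": ["intro"], "date": "2019-01-01"},
    {"title": "Second Post", "url": "/2019/02/01/second.html"},
]
search_json = build_index(pages)
```

In the real setup the Liquid template shipped with the project produces this file; the client-side script then searches the JSON entries in the browser.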

As the simple search example for Jekyll shows, there are three elements to the search:

  • Building the index from selected content from the web pages
  • The search API that accesses the index when the user enters the search term
  • The search UI where the user enters the search term and the element where the search displays the search results

While the methods for each element may vary, this is what most of the search implementations consist of.

Creating a fully-fledged search with building blocks

What if you want a fully-featured search engine on your website, with capabilities to enhance your search results?

In general, you need the same building blocks as with the simple Jekyll search.

Also, you need a crawler and a scraper to collect and extract the relevant content from the web pages, and a way to export the content to a search engine’s index. In addition, you need to develop a search UI that accesses the search engine’s API when the user searches and displays the results.

You can host the crawler (including the scraper) as well as the search engine on your own server. However, sharing the web server’s resources with the crawler and the search engine may slow down the load times of your web pages. You can also purchase cloud-based solutions from Amazon (Amazon Web Services, AWS) or Microsoft (Azure) and host the crawler and the search engine in the cloud.

Our premise for choosing a crawler is that it can export the crawled and scraped data to the index of the search engine. The premise for selecting a search engine is that its index can be accessed through a REST API and that the data is returned as JSON. This makes it easier to develop the search UI with JavaScript-based libraries and frameworks.

Taking these premises into account, we recommend Scrapy for crawling, scraping, and exporting; Elasticsearch as the search engine; and any JavaScript-based library (Vue.js, React) or framework (Angular) for the search UI.

Crawling and scraping with Scrapy

Scrapy is an open source, general-purpose web crawling framework developed in Python. You can use Scrapy to crawl websites and extract structured data. Scrapy also supports sending the data (items) to an Elasticsearch server for indexing.

Before you can use Scrapy, you need to install it on your computer. For installation instructions, visit the Scrapy documentation.

Scrapy uses a spider instance to crawl a website. It visits the pages of the website and uses selectors to scrape relevant items from them. The items go through an item pipeline and end up in a feed exporter, which stores the data to a file, for example in JSON format. For instructions on creating a spider, visit the tutorial in the Scrapy documentation.

Exporting the data to Elasticsearch index

Elasticsearch is an open source search engine that supports full-text search and analytics, among other things. Elasticsearch provides a REST API for accessing the index, which consists of JSON documents. For more information, visit the Elasticsearch documentation.

Scrapy supports indexing the items to Elasticsearch with the ScrapyElasticSearch module. Instead of exporting the items to a file, the module’s item pipeline sends them to an Elasticsearch index. For instructions on setting up ScrapyElasticSearch, visit its GitHub pages.
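With the module installed, the export is configured in the Scrapy project’s settings.py. The setting names below follow the ScrapyElasticSearch README as we understand it, so verify them (and the values, which are placeholders) against the version you install:

```python
# settings.py (fragment): route scraped items to Elasticsearch
# instead of (or in addition to) a file-based feed export.
ITEM_PIPELINES = {
    "scrapyelasticsearch.scrapyelasticsearch.ElasticSearchPipeline": 500,
}

ELASTICSEARCH_SERVERS = ["localhost"]   # Elasticsearch host(s)
ELASTICSEARCH_INDEX = "site-search"     # index the items are written to
ELASTICSEARCH_TYPE = "items"            # document type
ELASTICSEARCH_UNIQ_KEY = "url"          # field used to deduplicate pages
```

With this in place, every item the spider yields ends up as a JSON document in the `site-search` index.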

Accessing the index with search UI

Elasticsearch’s index consists of JSON documents that you can access through REST API endpoints. As both technologies, REST APIs and JSON, are commonly used with JavaScript-based web applications, they allow a convenient way of setting up communication between the Elasticsearch server and the search UI on the web page.
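The UI’s request to Elasticsearch is an ordinary HTTP call to the index’s `_search` endpoint. The sketch below shows the idea in Python for illustration (in the browser, the UI would make the same call with JavaScript); the index name and the searched fields are assumptions matching the crawler examples above:

```python
import json
import urllib.request

def build_search_query(term):
    """Build an Elasticsearch full-text query over assumed title/content fields."""
    return {
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["title", "content"],
            }
        }
    }

def search(term, host="http://localhost:9200", index="site-search"):
    """POST the query to the index's _search endpoint and return the raw hits."""
    request = urllib.request.Request(
        f"{host}/{index}/_search",
        data=json.dumps(build_search_query(term)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["hits"]["hits"]
```

Each hit in the response carries the indexed document in its `_source` field, which is what the search UI would render as a result.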

The search UI on the web page should have a search box (input field) and an element where the search results are displayed. The functionality for these elements can be implemented with JavaScript libraries and frameworks. Angular is a very commonly used framework, developed in TypeScript and transpiled to a JavaScript web application. React and Vue.js represent the most popular JavaScript libraries.

Simple installation of fully-featured AddSearch search-as-a-service

If you are looking for an easier way to add search to your static site, AddSearch is the way to go.

Setting up the search manually requires some technical expertise and time, and sharing computing resources with the search may slow down both your website and the search itself. Most likely, you wouldn’t want that.

AddSearch provides a fully-featured instant, visual site search for static websites. Instead of taking multiple steps in setting up a site search, AddSearch takes only one line of code and 5 minutes to make your static website searchable.

What will you find at AddSearch?

  • Very capable crawler and a proprietary search engine built on top of Elasticsearch. AddSearch will take care of the crawling and indexing as well as the search user interface.
  • AddSearch gives your visitors a great user experience with our fast and visual user interface that works on any platform. You can test the search on top of this page. AddSearch hosts these services so the search doesn’t hog resources from your web server, keeping your website lightweight and fast.
  • AddSearch comes fully-featured. You can prioritize relevant search results over others and promote your most important pages to the top of the search results.
  • We will support you at every step and our customer support is fast, friendly and responsive.

Adding the search to your static website is very easy. Submit your site for a free trial and we will crawl and index your site.

Get started now with our 7-day free trial

Lightning fast, accurate and customizable Site Search engine with a Search API. Works on all devices and is easy to install.

Start free trial