How topic modeling helped us restructure the blog Abierto al Público and increase our search visibility
When it comes to knowledge management, and to open knowledge in particular, the main challenge is arguably no longer a lack of information. For those with access to it, the Internet can connect us with an abundance of knowledge, increasingly in formats that are free to access. That said, curating, navigating, and synthesizing so much information remains a real dilemma: how to connect the most relevant and actionable resources to the people searching for them. This goes beyond aesthetics, promotion, or marketing. As the so-called “infodemic” observed recently around the world makes clear, we urgently need new ways of helping people find the knowledge they are looking for, and content creators must take on greater responsibility for presenting knowledge and information clearly and comprehensively to their readers.
For these reasons, the IDB is constantly exploring and refining techniques to better connect the Latin American and Caribbean region with quality open knowledge. One very particular example of this, out of many ongoing efforts, includes work our team has done to improve the curation and organization of the content published here at Abierto al Público. In this article, we share some of the learnings about how we have used techniques like topic modeling and SEO to approach content management more efficiently, with the guiding motivation to better support readers in finding the content and learning resources that are most meaningful and practical to them. We hope that you can use these techniques to better organize and share your knowledge, too.
A big milestone, new emerging topics – and a lot of content
Abierto al Público is first and foremost an IDB resource for sharing learnings about open knowledge. Having recently celebrated more than five years online, the blog has published more than 500 articles related to all things open in connection with economic and social development in Latin America and the Caribbean, including open knowledge, open data, open government, open innovation and, more recently, open source technology.
But how to navigate and make sense of it all — especially for our first-time visitors? It was an important time for us to reflect on this question, for various reasons. For one, our coverage related to the open movement continued to evolve beyond the blog’s original categories. We needed a new method for grouping content in a way that would make sense to readers while also offering flexibility to incorporate future content as we continue to grow and follow new lines of conversation. Second, the volume of content discouraged too much manual sorting and rearranging. This is an important consideration because we want to be efficient with our use of time and resources.
With this in mind, we wanted to see how AI and Natural Language Processing could play a role in complementing our strategy and streamlining the otherwise manual task of sorting and categorizing our content in a balanced and consistent manner.
Centering on SEO: Mapping content for the benefit of both people and search engines
Similar to the discussion around good practices for open data, it is also broadly essential for good knowledge and content management that related subject matter can be found and followed by both people and machines.
For this reason, understanding the science behind search engine optimization became an important focal point of our content management strategy. To improve how your content appears in search results, it helps to understand that search engines like Google constantly scan the web, evaluating the sitemaps of different content providers to determine what that content is about and how relevant and high-quality it is for a user’s search. Because of this, we learned how important it is to maintain consistent categories and tags, as well as relevant links between related content.
When it comes to categories, each article should belong to only one, like the branch of a tree or the hub at the center of a wheel. The categories should be roughly balanced in terms of the amount of content in each, and a clear logic should connect the content to its category while also making it distinct from the other categories.
Learn more here about categorization and topic clusters.
But how many categories would we need to organize so much content? This was our next question. We needed to compare and evaluate our options without too much manual sorting, and it is in this context that topic modeling becomes highly relevant.
How we used Topic Modeling to identify and create categories of content
Topic modeling is one of several Natural Language Processing techniques within the wider field of artificial intelligence. It can be applied to automatically identify underlying, hidden or latent themes, patterns or groupings within a large volume of text, known as the “corpus”. As we have learned and shared from previous experiences with Artificial Intelligence, success depends largely on the quantity and quality of the data being used, and that reminder holds equally true for topic modeling.
In the case of Abierto al Público, we first gathered the 500+ articles (the corpus) into a single CSV file for analysis. This can be done with web scraping techniques or by other means, depending on your access to the original sources and their formats.
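If the articles are only available as published web pages, a minimal web scraping sketch like the following can assemble them into a CSV file. This is an illustration rather than our exact pipeline: the URL and the HTML selectors below are hypothetical and would need to be adapted to the real site structure.

```python
# Minimal sketch: collect each article's title and body text into a single CSV file.
# The URL and HTML selectors are hypothetical and must be adapted to the real pages.
import csv

import requests
from bs4 import BeautifulSoup

article_urls = [
    "https://blogs.iadb.org/conocimiento-abierto/es/example-article/",  # hypothetical URL
]

rows = []
for url in article_urls:
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.find("h1").get_text(strip=True)  # assumes the title sits in the first <h1>
    body = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    rows.append({"url": url, "title": title, "text": body})

with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "title", "text"])
    writer.writeheader()
    writer.writerows(rows)
```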
The next step was to clean the data to maximize the emphasis on the thematic content. For example, we removed punctuation as well as words that provide little comparative information about the text, such as prepositions, conjunctions and other stop words. Programming techniques in Python can help facilitate this process.
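As an illustration, here is a minimal cleaning sketch in Python, assuming the corpus.csv file from the previous step with a "text" column and Spanish-language articles; adapt the stop word list and the accepted characters to your own language and data.

```python
# Minimal cleaning sketch: lowercase the text, drop punctuation and digits,
# and remove stop words and very short tokens.
import re

import nltk
import pandas as pd

nltk.download("stopwords")
from nltk.corpus import stopwords

stop_words = set(stopwords.words("spanish"))

def clean(text):
    text = text.lower()
    # keep only letters (including accented characters), dropping punctuation and digits
    text = re.sub(r"[^a-záéíóúñü\s]", " ", text)
    # drop stop words and tokens shorter than three characters
    return [t for t in text.split() if t not in stop_words and len(t) > 2]

df = pd.read_csv("corpus.csv")
df["tokens"] = df["text"].apply(clean)
```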
After the data set was cleaned and prepared, we started the iterative process of training the topic modeling algorithm, running the cleaned corpus through a modeling engine. Each iteration assigned a different, arbitrary number of buckets, or topics, in which to classify the terms found in the corpus. The output provided the grouping of each individual article along with a probability indicating how well that article matched the rest of the content in the same grouping.
What tools are available to implement topic modeling?
There are multiple tools that can help you run the Topic Modeling exercise, such as:
- For working with open source, the Gensim library for Python or the topicmodels package for R (a short Gensim sketch follows this list).
- Although they are not open source, there are also several services that let you perform topic modeling with limited coding experience and at a reasonable cost. Two examples are the Amazon Comprehend service on AWS and the Latent Dirichlet Allocation (LDA) module included in Azure Machine Learning Studio.
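To illustrate the open source route, here is a minimal Gensim sketch, assuming the tokenized articles ("tokens" column) from the cleaning step above. The num_topics value is the arbitrary number of buckets that you vary on each iteration of the process described earlier.

```python
# Minimal Gensim sketch: build a dictionary and bag-of-words corpus, train an LDA
# model for a chosen number of topics, and inspect the groupings and probabilities.
from gensim import corpora
from gensim.models import LdaModel

documents = df["tokens"].tolist()  # token lists from the cleaning step

dictionary = corpora.Dictionary(documents)
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop very rare and very common terms
bow_corpus = [dictionary.doc2bow(doc) for doc in documents]

num_topics = 5  # the arbitrary number of buckets to test in this iteration
lda = LdaModel(bow_corpus, num_topics=num_topics, id2word=dictionary,
               passes=10, random_state=42)

# Top words per topic, to help interpret each grouping
for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)

# Topic probabilities per article: how confidently each article fits each grouping
for i, doc_bow in enumerate(bow_corpus[:5]):
    print(df["title"].iloc[i], lda.get_document_topics(doc_bow))
```

Setting a fixed random_state simply makes each run reproducible, which helps when comparing outputs across iterations.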
Interpreting the results
Analyzing the results of a topic modeling exercise can be a very subjective task, so it is important to involve subject matter experts and to cross-validate the patterns the machine has identified with human judgment. We experimented with combinations ranging from 3 to 10 topics and carefully compared the results of each output, until we finally settled on the balance offered by the 5-topic solution, which we interpreted as the blog’s new categories.
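As a complementary, quantitative aid to that expert review, a topic coherence score can help compare the candidate topic counts. Here is a minimal sketch, reusing the dictionary, bow_corpus and documents built in the previous step; it complements, rather than replaces, the human reading of each output.

```python
# Sketch: compare candidate topic counts (3 to 10) using a coherence score.
from gensim.models import CoherenceModel, LdaModel

for k in range(3, 11):
    lda_k = LdaModel(bow_corpus, num_topics=k, id2word=dictionary,
                     passes=10, random_state=42)
    coherence = CoherenceModel(model=lda_k, texts=documents, dictionary=dictionary,
                               coherence="c_v").get_coherence()
    print(f"{k} topics: coherence = {coherence:.3f}")
```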
Once we reached that point, we repeated the topic modeling process on the content inside each category to identify more specific sub-themes or clusters. This second round helped us build out new content that highlights what each category contains, along with its related subtopics. From there, we could also make the final validations and adjustments to specific tags and incorporate specific key phrases for SEO.
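A sketch of how that second round might look in code, again reusing the model and data structures from the earlier steps: assign each article to its dominant topic, keep one category, and re-run the same modeling on that subset to surface sub-themes.

```python
# Sketch of the second round: dominant topic per article, then LDA on one category.
from gensim import corpora
from gensim.models import LdaModel

def dominant_topic(doc_bow):
    # the topic with the highest probability for this article
    return max(lda.get_document_topics(doc_bow), key=lambda pair: pair[1])[0]

df["category"] = [dominant_topic(doc_bow) for doc_bow in bow_corpus]

subset = df[df["category"] == 0]  # e.g. the first of the five categories
sub_docs = subset["tokens"].tolist()
sub_dictionary = corpora.Dictionary(sub_docs)
sub_corpus = [sub_dictionary.doc2bow(doc) for doc in sub_docs]
sub_lda = LdaModel(sub_corpus, num_topics=3, id2word=sub_dictionary,
                   passes=10, random_state=42)
print(sub_lda.print_topics(num_words=8))
```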
Applying and implementing the results into our strategy for improved search visibility
This classification structure has helped us expand our content coverage while maintaining specific points of focus. It has also helped with common legacy issues: with a clear map of existing content at hand, we can avoid duplicating what already exists and continue building constructively on the conversations we have already invested in across different topics. This helps Abierto al Público respond to users’ interests with content that is better structured and connected, and it has made that content more visible and attractive to search engines.
As a result of this and a few other editorial changes, Abierto al Público has more than doubled the visibility of its content via organic search over the past year.
And you? How do you think topic modeling can benefit knowledge resources for your work, community or government?