In the contemporary world of machine learning, “data is the new oil.” For state-of-the-art ML algorithms to work their magic, it is important to lay a strong foundation with access to relevant data. Volumes of crude data are available on the web today; all we need are the skills to identify and extract meaningful datasets. This talk presents the power of the most fundamental aspect of machine learning, dataset curation, which rarely gets its due limelight. It also walks the audience through the process of constructing good-quality datasets as done in formal settings, with a simple hands-on Python example. The goal is to establish the importance of data, especially in a worthy format, and the role it plays in building smart learning algorithms.

Outline

Introduction

  • Popularity of Machine Learning & Applications
  • Significance of honing dataset building skills
  • Importance in academia: expanding the domains available for research, solving novel problems with ML, and leading research efforts in this space
  • Importance in industry: plenty of raw data but rarely an exact dataset ready for training, and the need to proactively identify which data to log for specific problems

Finding data source(s)

  • Guided Search based on a problem definition: Identifying essential data signals
  • Unguided Search with no problem definition in mind: Dealing with ambiguity
  • Tips on identifying data sources

Data Extraction - Hands-On Example (Audience-level & Time-constraint dependent)

  • Live Python example implemented via Jupyter Notebook
  • Use of Python tools: Beautiful Soup and Selenium
  • Step-by-step process to plan data extraction
  • Nitty-gritty details about tools and the extraction code itself
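To give a flavour of the kind of extraction the live example walks through, here is a minimal Beautiful Soup sketch. The inline HTML and the `quote`/`text`/`author` class names are hypothetical stand-ins for a real fetched page; in the workshop the page would be downloaded first (e.g. with `requests` or driven via Selenium for JavaScript-heavy sites).

```python
from bs4 import BeautifulSoup

# Inline HTML standing in for a downloaded page
# (a real workflow would fetch it with requests or Selenium first).
html = """
<html><body>
  <div class="quote"><span class="text">Data is the new oil.</span>
    <span class="author">Anonymous</span></div>
  <div class="quote"><span class="text">Garbage in, garbage out.</span>
    <span class="author">Unknown</span></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Plan the extraction as CSS selectors, then pull each record's fields.
records = []
for div in soup.select("div.quote"):
    records.append({
        "text": div.select_one("span.text").get_text(strip=True),
        "author": div.select_one("span.author").get_text(strip=True),
    })

print(records)
```

The same select-then-extract pattern scales from this toy snippet to full pages: identify a repeating container element, then read the fields you need out of each one.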

Dataset Preparation

  • Cleaning
  • Anonymizing
  • Standardizing
  • Structuring
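The four steps above can be sketched in a few lines of plain Python. The toy records and their fields (`name`, `email`, `age`) are hypothetical; the point is the shape of the pipeline, not any particular schema.

```python
import hashlib

# Toy raw records as they might come off the web (hypothetical fields).
raw = [
    {"name": "Alice Smith", "email": "ALICE@EXAMPLE.COM ", "age": "34"},
    {"name": "Bob Jones",   "email": " bob@example.com",   "age": ""},
]

def prepare(record):
    # Cleaning: drop records with missing required fields.
    if not record["age"].strip():
        return None
    # Anonymizing: replace direct identifiers with a one-way hash.
    user_id = hashlib.sha256(
        record["email"].strip().lower().encode()
    ).hexdigest()[:12]
    # Standardizing: normalize types and casing.
    age = int(record["age"])
    # Structuring: emit a fixed, documented schema.
    return {"user_id": user_id, "age": age}

dataset = [r for r in (prepare(rec) for rec in raw) if r is not None]
print(dataset)
```

Each step is deliberately separable: in practice cleaning rules, anonymization policy, and the output schema evolve independently as the dataset matures.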

Conclusion and Takeaways

  • Reiterating the need for good, reliable datasets: laying a strong foundation for ML
  • Pointers on why and how to choose among data extraction techniques for a given application, weighing their pros and cons
  • Personal anecdotes and recommendations for different use cases, from an ML engineer's perspective

The workshop is based on our book, Sculpting Data for ML: The first act of Machine Learning.