Business Situation

Researchers at the life sciences firm depended on multiple scientific databases—including PubMed, PMC, ScienceDirect, MDPI, bioRxiv, and arXiv—to compile research updates and newsletters for their pharmaceutical clients. Each reporting cycle involved searching every platform individually, reviewing overlapping results, validating the relevance of articles, and organizing the findings into structured grids and periodic newsletters. Since the same publication often appeared across several databases, researchers had to manually detect and remove duplicates.

In many cases, search queries returned thousands—or even millions—of results, requiring additional filtering before any meaningful analysis could begin. Database-specific limitations also meant that complex queries had to be broken into smaller, manageable searches. For workflows such as preparing 15- to 30-day research briefs and newsletters, this process could take between 20 and 30 hours per reporting cycle.

To overcome these inefficiencies, the organization envisioned an AI-powered research assistant that could mirror its existing workflow while automating repetitive, time-consuming tasks. The objective was not to replace researchers, but to create a centralized platform where literature aggregation, validation, and report creation could be managed seamlessly in one place—while still allowing researchers full control over review and final outputs.

To turn this vision into reality, the firm partnered with Unthinkable to develop a customized, AI-assisted research platform tailored to its specific needs.

During discovery, the following capabilities were defined and subsequently delivered:

  • Built a unified platform to aggregate research from multiple scientific databases, eliminating fragmented and repetitive searches.

  • Enabled both natural-language and structured queries within a single interface to support diverse researcher workflows.

  • Implemented intelligent deduplication using DOI and publication metadata to automatically detect and remove duplicate entries.

  • Validated and scored articles for relevance before summary creation, while allowing researchers to review and refine selections.

  • Automated repetitive tasks such as filtering, thematic categorization, and the generation of concise research summaries.

  • Provided researcher controls to update metadata, adjust relevance, tag article status (new, updated, preprint, published), and add missing articles via DOI lookup.

  • Managed large result sets through query refinement prompts and AI-assisted optimization to improve relevance and efficiency.
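The deduplication capability above can be illustrated with a minimal sketch. This is not the firm's implementation; it assumes hypothetical `Article` records carrying a DOI and title, and shows the general idea of keying on a normalized DOI when one is present, falling back to a normalized title otherwise:

```python
# Hypothetical sketch: deduplicating cross-database results by DOI,
# falling back to a normalized title when no DOI is available.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Article:
    doi: Optional[str]   # e.g. "10.1000/xyz123" or a full https://doi.org/ URL
    title: str
    source: str          # originating database, e.g. "PubMed", "PMC"

def normalize_doi(doi: Optional[str]) -> Optional[str]:
    """Lowercase the DOI and strip a leading resolver URL, if any."""
    if not doi:
        return None
    return doi.strip().lower().removeprefix("https://doi.org/")

def normalize_title(title: str) -> str:
    """Collapse whitespace and lowercase so near-identical titles match."""
    return " ".join(title.lower().split())

def deduplicate(articles: list[Article]) -> list[Article]:
    """Keep the first occurrence of each article across all databases."""
    seen: set[str] = set()
    unique: list[Article] = []
    for article in articles:
        key = normalize_doi(article.doi) or normalize_title(article.title)
        if key not in seen:
            seen.add(key)
            unique.append(article)
    return unique
```

In practice, production systems typically also compare richer metadata (authors, journal, publication date) to catch records where neither the DOI nor the title matches exactly; the sketch keeps only the first-seen copy, which preserves the researcher's preferred source ordering.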