ChatGPT launched with a 1,631 interest score. Perplexity Deep Research pulled 782. Raw numbers are a start, but engagement ratio and category positioning tell you more. Both are below.
Side-by-side comparison of ChatGPT and Perplexity Deep Research based on community engagement data.
Optimizing language models for dialogue
Save hours of time on in-depth research and analysis
| Category | ChatGPT | Perplexity Deep Research |
|---|---|---|
| Artificial Intelligence | Yes | Yes |
| Bots | Yes | Yes |
| Messaging | Yes | - |
| Search | - | Yes |
ChatGPT leads on both raw interest score and engagement ratio. That doesn't happen often.
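The page doesn't publish its exact engagement-ratio formula, but a common definition is discussion volume divided by raw interest. A minimal sketch under that assumption; the interest scores come from the comparison above, while the comment counts are illustrative placeholders, not real data:

```python
# Engagement ratio sketch: discussion volume per unit of raw interest.
# Interest scores are from the comparison above; comment counts are
# illustrative placeholders only.
products = {
    "ChatGPT": {"interest": 1631, "comments": 500},                   # placeholder comments
    "Perplexity Deep Research": {"interest": 782, "comments": 200},   # placeholder comments
}

def engagement_ratio(stats: dict) -> float:
    """Assumed definition: comments divided by interest score."""
    return stats["comments"] / stats["interest"]

for name, stats in products.items():
    print(f"{name}: engagement ratio = {engagement_ratio(stats):.3f}")
```

A higher ratio means each unit of raw interest generated more discussion, which is why a product can trail on interest score yet lead on engagement ratio.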
These products share two categories: Artificial Intelligence and Bots. The moderate overlap suggests they target related but distinct use cases.
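One way to quantify that overlap is Jaccard similarity over category tags. The page doesn't state its own definition of "moderate overlap", so this is a sketch using the categories from the table above:

```python
# Jaccard similarity over category tags: shared tags divided by all tags.
# Category sets are taken from the comparison table above.
chatgpt = {"Artificial Intelligence", "Bots", "Messaging"}
perplexity = {"Artificial Intelligence", "Bots", "Search"}

def jaccard(a: set, b: set) -> float:
    """Size of the intersection divided by size of the union."""
    return len(a & b) / len(a | b)

print(sorted(chatgpt & perplexity))  # ['Artificial Intelligence', 'Bots']
print(jaccard(chatgpt, perplexity))  # 0.5
```

Two shared tags out of four total gives 0.5, consistent with calling the overlap moderate rather than near-identical.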
Either the product didn't meet our engagement threshold, or it doesn't share enough category tags with the other product to generate a meaningful comparison. We'd rather show no comparison than a misleading one.
Each product's data reflects its launch period, so the comparison shows both products' engagement metrics from when they launched. The build date at the bottom of the page shows when the index was last refreshed.
Not yet. Current comparisons use launch-period data only. Post-launch tracking is on our roadmap.
Generally, yes. Engagement ratio is hard to fake. A product can generate artificial interest, but sustained discussion threads require people who actually used the product and had something to say about it.