
Praveen Kumar @PraveenInPublic

Over the weekend, I tested LLMs and embeddings together to tackle the hallucination problem.

It's an extension of last week's experiment, and the combination of the two is mind-blowing!

  1. Feature extraction with Google Gemini Flash
  2. Embeddings with OpenAI text-embedding-3-small

I have been trying to build an echo chamber for Twitter: one that doesn't just curate your "For you" tab from your interactions, but lets you tune it to your needs.

I used the LLM as a feature extractor (pulling out topics, sentiment, etc.).
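A minimal sketch of what that step can look like, assuming the google-generativeai Python SDK and a Gemini Flash model; the prompt wording and the JSON schema are illustrative assumptions, not the exact ones from the experiment:

```python
# Sketch of the feature-extraction step, assuming the google-generativeai SDK.
# The prompt and output schema below are illustrative, not the originals.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

def extract_features(tweet_text: str) -> dict:
    """Ask the LLM for topics and sentiment, returned as JSON."""
    prompt = (
        "Extract features from this tweet and reply with JSON only, shaped as "
        '{"topics": [...], "sentiment": "positive|neutral|negative"}.\n\n'
        f"Tweet: {tweet_text}"
    )
    response = model.generate_content(prompt)
    return json.loads(response.text)  # assumes the model returns clean JSON
```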

Then I embedded all the topics, sentiments, and tweets, and ran them through scoring logic that produces a weighted-average score for each tweet in my feed.
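A sketch of what the embedding and scoring stage could look like, assuming the openai SDK and NumPy; the weights, the interest vector, and the similarity logic are placeholder assumptions standing in for the actual weighted-average logic:

```python
# Sketch of the embedding + scoring stage, assuming the openai Python SDK
# (reads OPENAI_API_KEY from the environment). Weights and the "interest
# vector" are illustrative placeholders, not the actual scoring logic.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of strings with text-embedding-3-small."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_tweet(tweet_vec, topic_vecs, interest_vec, weights=(0.6, 0.4)):
    """Weighted average of best topic match and whole-tweet similarity
    against a vector representing my interests (hypothetical weighting)."""
    w_topic, w_tweet = weights
    topic_sim = max(cosine(t, interest_vec) for t in topic_vecs)
    tweet_sim = cosine(tweet_vec, interest_vec)
    return w_topic * topic_sim + w_tweet * tweet_sim
```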

Voilà! My feed is exactly what I need. I can find the top 20 tweets worth engaging with, without having to read all 200.
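As a usage example, the final ranking can be a plain sort on those scores; it continues the sketch above, and all variable names here are hypothetical:

```python
# Hypothetical ranking step: keep the 20 highest-scoring tweets.
scored = [(score_tweet(v, tv, interest_vec), tweet)
          for tweet, v, tv in zip(tweets, tweet_vecs, topic_vec_lists)]
top_20 = [t for _, t in sorted(scored, key=lambda x: x[0], reverse=True)[:20]]
```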

It's currently costing me about $0.10 per 100 tweets crawled (i.e., $0.001 per tweet), but there's a lot of room to reduce that.

02:36 AM · May 27, 2024