Turn articles you follow into daily learning

Follows a thought leader's blog and generates daily takeaways

What you will receive

Learning: Paul Graham

just now

From Paul Graham's latest:

"How to Do Great Work"

What I learned:
Great work comes from working on things you're genuinely curious about. Forcing yourself to work on "important" problems rarely produces breakthroughs.

Actionable insight:
Ask yourself: What would I work on if I had complete freedom? That's probably what you should actually be doing.

Read the essay →

How it works

  1. Humrun fetches the latest post from a thought leader's blog
  2. AI extracts a learning and makes it actionable
  3. You build knowledge from people you admire

You configure

https://paulgraham.com

The thought leader's blog or RSS feed
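This field accepts either a site URL or a direct feed URL; the script tries RSS first either way. A quick way to preview which path your URL will take (an illustrative check, assuming the feedparser package is installed; not part of the template itself):

import feedparser

feed = feedparser.parse("https://paulgraham.com")
if feed.entries:
    print(f"Feed found: newest entry is {feed.entries[0].get('title')!r}")
else:
    print("No feed entries; the run will fall back to scraping the page")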

sk-...

For generating learning summaries
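The key is read from the environment at run time. If you run the script outside Humrun, a small guard like this (an addition for local use, not part of the template) makes a scheduled run fail loudly when either setting is missing:

import os

for var in ("BLOG_URL", "OPENAI_API_KEY"):
    if not os.environ.get(var):
        raise SystemExit(f"Missing required environment variable: {var}")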

Python code
import os
from urllib.parse import urljoin

import requests
import feedparser
from bs4 import BeautifulSoup

BLOG_URL = os.environ.get("BLOG_URL")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

# Try RSS first: feedparser fetches the URL itself and parses any feed it finds
feed = feedparser.parse(BLOG_URL)

if feed.entries:
    post = feed.entries[0]
    title = post.get("title", "Latest Post")
    content = post.get("summary", post.get("description", ""))
    link = post.get("link", BLOG_URL)
else:
    # No feed entries; fall back to scraping the blog's front page
    response = requests.get(BLOG_URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")

    # Look for links that typically point to individual posts
    articles = soup.select("article a, .post a, h2 a, h3 a")
    if articles:
        article = articles[0]
        # urljoin leaves absolute URLs alone and resolves relative ones
        link = urljoin(BLOG_URL, article.get("href", ""))

        article_resp = requests.get(link, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
        article_soup = BeautifulSoup(article_resp.text, "html.parser")

        title_elem = article_soup.select_one("h1, .title")
        title = title_elem.get_text(strip=True) if title_elem else "Latest Post"
        content = article_soup.get_text(strip=True, separator=" ")[:2500]
    else:
        # No obvious post links; summarize the front page itself
        title = "Latest"
        content = soup.get_text(strip=True, separator=" ")[:2500]
        link = BLOG_URL

# Generate learning summary
prompt = f"""Extract a learning from this article.

Title: {title}
Content: {content}

Format:
- What I learned (1-2 sentences, specific insight)
- Actionable insight (1 sentence, something concrete to try)

Keep it under 60 words total. Be specific to this article."""

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 150
    },
    timeout=60
)
response.raise_for_status()  # surface API errors instead of a cryptic KeyError

learning = response.json()["choices"][0]["message"]["content"]

print(f"From: {title}\n")
print(learning)
print(f"\nRead more: {link}")
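The script calls the Chat Completions endpoint over raw HTTP so it needs nothing beyond requests. If you prefer the official client, an equivalent call looks like this (assumes pip install openai; the client reads OPENAI_API_KEY from the environment automatically):

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=150,
)
learning = completion.choices[0].message.content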
Suggested schedule: Every day at 9 AM
Notifications: After every run