Details on My Candidate Statement for the DSF
The Django Software Foundation Board of Directors elections are scheduled for November 2025, and I've decided to throw my hat into the ring. My hope, if elected, is to be selected as Treasurer. I have four main objectives for my two-year term.
- Getting an Executive Director (ED) to help run the day-to-day operations of the DSF
- Identifying small to midsized companies for sponsorships
- Implementing a formal strategic planning process
- Setting up a fiscal sponsorship program to allow support of initiatives like Django Commons
These are outlined in my candidate statement, but I want to provide a bit more detail on why I think they're important, along with some high-level details on a plan to get them to completion.
These four goals are interconnected. We need an ED to scale operations, but funding an ED requires increased revenue through corporate sponsorships. Both benefit from having a strategic plan that guides priorities. And fiscal sponsorship potentially creates a new revenue stream while strengthening the ecosystem. This isn't four separate initiatives - it's a coherent plan for sustainable growth.
Getting an Executive Director (ED) to help run the day-to-day operations of the DSF
An ED provides day-to-day operational capacity that volunteer boards simply cannot match. While board members juggle DSF work with full-time jobs, an ED could:
- Call potential corporate sponsors every week, not just when someone has spare time
- Coordinate directly with Django Fellows on priorities and deliverables
- Support DjangoCon organizers across North & South America, Europe, Africa, and Asia with logistics and continuity
- Respond to the steady stream of trademark, licensing, and community inquiries
- Write grant applications to foundations that fund open source
- Prepare board materials so directors can focus on governance, not research
As Jacob Kaplan-Moss says in his 2024 DjangoCon US talk:
We’re already at the limit of what a volunteer board can accomplish
Right now we're missing opportunities because volunteer bandwidth is maxed out. We can't pursue major corporate sponsors that need regular touchpoints. We can't support ecosystem projects that need fiscal sponsorship. We can't scale the Fellows program even though there's clearly more work than the current Fellows can handle.
As Treasurer, hiring an ED would be my top priority. Based on comparable nonprofit ED salaries, a part-time ED (20 hours/week) would cost approximately $60,000-$75,000 annually including benefits and overhead. A full-time ED would be $120,000-$150,000.
The DSF's current annual budget is roughly $300,000. Adding even a part-time ED would require increasing revenue by roughly 20-25%. This is exactly why my second priority focuses on corporate sponsorships - we need sustainable revenue growth to support professional operations.
The path forward is phased: board members initiate corporate outreach to fund a part-time ED, who then scales up fundraising efforts to eventually become full-time and bring us toward that $1M budget Jacob outlined. We bootstrap professional operations through volunteer effort, then let the professional accelerate what volunteers started.
Identifying small to midsized companies for sponsorships
In his 2024 DjangoCon US talk, Jacob Kaplan-Moss outlines what the DSF could do with a $1M budget. I believe this is achievable, but it requires a systematic approach to corporate sponsorships.
Currently, the DSF focuses primarily on major sponsors. This makes sense - volunteer boards have limited bandwidth, so targeting "whales" is efficient. But we're leaving significant revenue on the table.
Consider the numbers: US Census data shows there are roughly 80,000-400,000 small to mid-sized tech companies (depending on the definition). Stack Overflow's 2024 survey indicates 46.9% of professional developers use Python, and 38% of Python web developers use Django. Even capturing a small fraction of companies using Django in production at a modest sponsorship tier ($500-$2,500/year) could significantly increase DSF revenue.
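To make that concrete, here's a rough back-of-envelope sketch of that math. Every input is an illustrative assumption (midpoints of the ranges above and an arbitrary 0.1% capture rate), not DSF data:

# Back-of-envelope estimate using the figures cited above.
# All inputs are illustrative assumptions, not DSF data.
small_mid_tech_companies = 200_000  # midpoint of the 80,000-400,000 range
python_share = 0.469                # Stack Overflow 2024: professional devs using Python
django_share = 0.38                 # share of Python web developers using Django
capture_rate = 0.001                # assume we sign up just 0.1% of them
avg_sponsorship = 1_500             # midpoint of the $500-$2,500/year tier

django_companies = small_mid_tech_companies * python_share * django_share
new_revenue = django_companies * capture_rate * avg_sponsorship
print(f"~{django_companies:,.0f} Django-using companies, "
      f"~${new_revenue:,.0f}/year at a 0.1% capture rate")
# ~35,644 Django-using companies, ~$53,466/year at a 0.1% capture rate

Even under these deliberately conservative assumptions, the result lands in the tens of thousands of dollars per year.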
The challenge isn't identifying companies - it's having capacity to reach them. This is where an Executive Director becomes critical.
What an Executive Director Would Enable
A part-time Executive Director (ED) could:
- Dedicate 10+ hours weekly to corporate outreach instead of the 1-2 hours volunteer board members can spare
- Maintain a CRM system tracking sponsor relationships, touchpoints, and renewal cycles
- Create targeted outreach campaigns to Django-using companies in specific sectors (healthcare tech, fintech, e-commerce, etc.)
- Develop case studies showing Django's business value to help companies justify sponsorship
- Provide consistent follow-up and relationship management that volunteer boards cannot maintain
My First 90 Days as Treasurer
If elected, here's my concrete plan:
Month 1:
- Audit current sponsors and revenue sources
- Identify 20 target companies (mix of sizes) currently using Django
- Work with current board to draft outreach templates and sponsorship value propositions
Month 2:
- Begin systematic outreach to target companies
- Track response rates and refine approach
- Engage with Django community leaders to identify additional prospects
Month 3:
- Report results to board
- If we've secured commitments for an additional $30K-$50K in annual recurring revenue, propose a budget to hire a part-time ED
- Continue to push forward the ED recruitment process
This is realistic volunteer-level effort (5-8 hours/week) that proves the concept before committing to an ED hire. Once we have an ED, they can scale this 5-10x.
Implementing a formal strategic planning process
The DSF needs a strategic plan - not as a bureaucratic exercise, but as a practical tool for making decisions and measuring progress.
Right now, we operate somewhat reactively. The Fellows program exists because it was created years ago. DjangoCons happen because organizers step up. Corporate sponsorships come in when companies reach out to us. This isn't necessarily bad, but it means we're not proactively asking: What should Django's ecosystem look like in 5 years? How do we get there?
A strategic plan would give us:
Clear priorities: When opportunities arise (a major donor, a new initiative, a partnership proposal), we can evaluate them against stated goals rather than deciding ad-hoc.
Accountability: We can measure whether we're making progress on what we said mattered. Did we grow the Fellows program like we planned? Did sponsorship revenue increase as projected?
Communication: Community members and potential sponsors can understand where the DSF is headed and how they can contribute.
As someone who's been in healthcare management since 2012, I've seen how strategic planning drives organizational effectiveness. The best plans aren't 50-page documents that sit on a shelf - they're living documents that inform quarterly board discussions and annual budget decisions.
For the DSF, I envision a strategic planning process that:
Year 1:
- Conduct stakeholder interviews with Fellows, corporate sponsors, community leaders, and DjangoCon organizers
- Identify 3-5 strategic priorities for the next 3 years (e.g., "double sponsorship revenue," "launch fiscal sponsorship program," "expand geographic diversity of Django community")
- Develop measurable outcomes for each priority
- Share draft plan with community for feedback
Ongoing:
- Review progress quarterly at board meetings
- Publish annual progress reports
- Revise plan every 3 years based on outcomes and changing needs
This connects directly to my other goals: we need a strategic plan to guide ED hiring, fundraising priorities, and fiscal sponsorship criteria. Without it, we're making isolated decisions rather than building toward a coherent vision.
Setting up a fiscal sponsorship program to allow support of initiatives like Django Commons
Django's success isn't just about the framework itself. It's about the ecosystem of packages, tools, and community organizations that have grown around it. Projects like Django Commons, Django Packages, regional Django user groups, and specialized packages serve thousands of developers daily. Yet these projects face a common challenge: they lack the legal and financial infrastructure to accept donations, pay for infrastructure, or compensate maintainers.
A fiscal sponsorship program would allow the DSF to serve as the legal and financial home for vetted Django ecosystem projects. Think of it as the DSF saying: "We'll handle the paperwork, taxes, and compliance; you focus on serving the community."
Who This Helps
- Community maintainers who need to accept donations but shouldn't have to become nonprofit experts
- Django Commons and similar initiatives that need to pay for infrastructure, security audits, or maintainer stipends
- Regional Django organizations that want to organize events or workshops but lack financial infrastructure
- Critical packages in the Django ecosystem that need sustainable funding models
- Corporate sponsors who want to support the broader ecosystem but need a tax-deductible vehicle
Why This Matters
Right now, valuable Django ecosystem projects are essentially flying without a net. If Django Commons needs to accept a $10,000 corporate donation to fund security audits, there's no clear path to doing so. If a critical package needs to pay for CI/CD infrastructure or compensate a maintainer for urgent security fixes, they're stuck. Some projects, such as Djangonaut Space, have tried to solve this individually by creating their own 501(c)(3)s or using platforms like Open Collective, but this fragments the community and creates overhead.
The Python Software Foundation already does this successfully for PyPI, PyLadies, and regional Python conferences. NumFOCUS sponsors dozens of scientific Python projects. There's no reason Django's ecosystem shouldn't have similar support.
For the DSF, this is also about long-term sustainability. A healthy Django depends on a healthy ecosystem. When popular packages go unmaintained or community initiatives shut down due to funding constraints, Django suffers. By providing fiscal sponsorship, we strengthen the entire Django community while also creating a new (modest) revenue stream through administrative fees that can fund DSF operations.
Moving Forward Together
These four initiatives - (1) hiring an Executive Director, (2) growing corporate sponsorships, (3) implementing strategic planning, and (4) launching fiscal sponsorship - represent an ambitious but achievable vision for the DSF's next two years. They're not just ideas; they're a roadmap for taking Django from a volunteer-run project to a professionally-supported ecosystem that can serve millions of developers for decades to come.
If you believe in this vision and think I can help make it happen, I'd be honored to have your vote. You can find more about my background and community involvement in my candidate statement.
Thank you for taking the time to read this, and regardless of the election outcome, I'm committed to supporting Django's continued success.
Deploying n8n on Digital Ocean
This guide shows you how to deploy n8n, a workflow automation tool, on your own VPS. Self-hosting gives you full control over your data, avoids monthly subscription costs, and lets you run unlimited workflows without usage limits.
I'm using Digital Ocean1 for this guide, but these steps work on any VPS provider. You'll need:
- A VPS with Ubuntu 24.04 (minimum 1GB RAM)
- A domain name with DNS access
- Basic familiarity with SSH and command line tools
Create and configure the VPS
Create a droplet with Ubuntu 24.04. Select a plan with at least:
- 1GB RAM
- 25GB Disk
- 1 vCPU
Note the IP address - you'll need it for DNS configuration.
SSH into the server:
ssh root@ipaddress
Update the system:
apt update
apt upgrade -y
Install Docker
Install Docker using the official repository:
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install Docker and its components
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Configure DNS
Create an A record at your domain registrar pointing your subdomain (e.g., n8n.yourdomain.com) to your droplet's IP address. If you're using Hover, follow their DNS management guide.
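If you want to confirm the record has propagated before continuing, a quick check from your workstation works. This is a minimal sketch; n8n.yourdomain.com and the IP below are placeholders for your own values:

import socket

domain = "n8n.yourdomain.com"  # replace with your subdomain
expected_ip = "203.0.113.10"   # replace with your droplet's IP address

# Resolve the A record through your local resolver
resolved_ip = socket.gethostbyname(domain)
print(f"{domain} resolves to {resolved_ip}")
if resolved_ip != expected_ip:
    print("DNS has not propagated yet - wait a few minutes and try again")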
Create Docker Compose configuration
Create a docker-compose.yml file on your server. Start with the Caddy service for handling SSL and reverse proxy:
services:
caddy:
image: caddy:latest
ports:
- "80:80"
- "443:443"
restart: always
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
- ./logs:/var/log/caddy
deploy:
resources:
limits:
cpus: '0.5'
memory: 500M
healthcheck:
test: ["CMD", "caddy", "version"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
volumes:
caddy_data:
caddy_config:
Create a Caddyfile in the same directory, replacing n8n.mydomain.com with your actual domain:
n8n.mydomain.com {
# Enable compression
encode gzip zstd
# Reverse proxy to n8n
reverse_proxy n8n:5678 {
header_up Host {host}
header_up X-Real-IP {remote}
header_up X-Forwarded-For {remote}
header_up X-Forwarded-Proto {scheme}
header_up X-Forwarded-Host {host}
transport http {
keepalive 30s
keepalive_idle_conns 10
}
flush_interval -1
}
# Security headers (relaxed CSP for n8n's dynamic interface)
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
Referrer-Policy "strict-origin-when-cross-origin"
Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data: blob:; connect-src 'self' wss: ws:; frame-src 'self'; worker-src 'self' blob:;"
-Server
}
# Enable logging
log {
output file /var/log/caddy/n8n-access.log {
roll_size 10MB
roll_keep 5
}
format json
}
# Enable TLS with reasonable settings
tls {
protocols tls1.2 tls1.3
}
}
Add n8n to Docker Compose
Add the n8n service under services: in your docker-compose.yml file. Replace n8n.mydomain.com with your domain in the environment variables:
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: always
environment:
- N8N_HOST=n8n.mydomain.com
- N8N_PORT=5678
- WEBHOOK_URL=https://n8n.mydomain.com/
- GENERIC_TIMEZONE=UTC
ports:
- "5678:5678"
volumes:
- n8n_data:/home/node/.n8n
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5678/healthz"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
Add n8n_data: to the volumes: section in your docker-compose.yml file:
volumes:
caddy_data:
caddy_config:
n8n_data: # new line
Your final docker-compose.yml file will look like this:
services:
caddy:
image: caddy:latest
ports:
- "80:80"
- "443:443"
restart: always
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
- ./logs:/var/log/caddy
deploy:
resources:
limits:
cpus: '0.5'
memory: 500M
healthcheck:
test: ["CMD", "caddy", "version"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: always
environment:
- N8N_HOST=n8n.mydomain.com
- N8N_PORT=5678
- WEBHOOK_URL=https://n8n.mydomain.com/
- GENERIC_TIMEZONE=UTC
ports:
- "5678:5678"
volumes:
- n8n_data:/home/node/.n8n
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5678/healthz"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
volumes:
caddy_data:
caddy_config:
n8n_data:
Start the containers
Run the containers in detached mode:
docker compose up -d
Complete the setup
Navigate to https://n8n.yourdomain.com in your browser. Follow the setup wizard to create your admin account. Once complete, you can start building workflows.
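Optionally, you can verify the whole chain (DNS, the Caddy-issued certificate, the reverse proxy, and n8n itself) from your own machine by requesting the same /healthz endpoint the container healthcheck uses. A minimal sketch, with n8n.yourdomain.com as a placeholder:

from urllib.request import urlopen

url = "https://n8n.yourdomain.com/healthz"  # placeholder - use your subdomain

# A 200 response means the certificate was issued and n8n is reachable behind Caddy.
with urlopen(url, timeout=10) as response:
    print(response.status, response.read().decode())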
1. Referral Link
Why We Need to Stop Fighting About AI Tools and Start Teaching Them
In mid-June, Hynek tooted the following on Mastodon:
Watching the frustratingly fruitless fights over the USEFULNESS of LLM-based coding helpers, I've come down to 3 points that explain why ppl seem to live in different realities:
Most programmers:
1) Write inconsequential remixes of trivial code that has been written many times before.
2) Lack the taste for good design & suck at code review in general (yours truly included).
3) Lack the judgement to differentiate between 1) & FOSS repos of nontrivial code, leading to PR slop avalanche.
1/3
So, if you're writing novel code & not another CRUD app or API wrapper, all you can see is LLMs fall on their faces.
Same goes for bigger applications if you care about design. Deceivingly, if you lack 2), you won't notice that an architecture is crap b/c it doesn't look worse than your usual stuff.
That means that the era of six figures for CRUD apps is coming to an end, but it also means that Claude Code et al can be very useful for certain tasks. Not every task involves splitting atoms.
2/3
There's also a bit of a corollary here. Given that LLMs are stochastic parrots, the inputs determine the outputs.
And, without naming names, certain communities are more… rigorous… at software design than others.
It follows that the quality of LLM-generated code will inevitably become a decision factor for choosing frameworks and languages and I'm not sure if I'm ready for that.
3/3
I've been having a lot of success with Claude Code recently, so I've been thinking about this toot a lot. Simon Willison talks a lot about the things he's been able to do because he can just ask OpenAI's ChatGPT while walking his dog. He's asking a coding agent to help him with ideas he has in languages he may not be familiar with. However, he's a good enough programmer that he can spot the anti-patterns the agent writes.
For me, it comes down to the helpfulness of these agentic coding tools: they help me write boilerplate code more quickly. When something is trivially easy to implement, like another CRUD app or an API wrapper, that problem is already solved; we don't need to keep solving it in ways that don't really help. What we need to do to become better programmers is figure out how to solve problems most effectively. And if that means creating a CRUD app or an API wrapper or whatever, then yeah, you're not solving any huge problem there. But if you're looking to solve something in a very unique or novel way, agentic coding tools aren't going to help you as much.
I don't need to know how the internal combustion engine of my car works. I do need to know that when the check engine light comes on, I need to take it to a mechanic. And then that mechanic is going to use some device that lets them know what is wrong with the car and what needs to be done to fix it. This seems very analogous to the coding agents that we're seeing now. We don't have to keep trying to solve those problems with well-known solutions. We can and we should rely on the knowledge that is available to us and use that knowledge to solve these problems quickly. This allows us to focus on trying to solve new problems that no one has ever seen.
This doesn't mean we can skip learning the fundamentals. Like blocking and tackling in football, if you can't handle the basic building blocks of programming, you're not going to succeed with complex projects. That foundational understanding remains essential.
The real value of large language models and coding agents lies in how they can accelerate that learning process. Being able to ask an LLM about how a specific GitHub action works, or why you'd want to use a particular pattern, creates opportunities to understand concepts more quickly. These tools won't solve novel problems for you—that's still the core work of being a software developer. But they can eliminate the repetitive research and boilerplate implementation that used to consume so much of our time, freeing us to focus on the problems that actually require human creativity and problem-solving skills.
How many software developers write in assembly anymore? Some of us maybe, but really what it comes down to is that we don't have to. We've abstracted away a lot of that particular knowledge set to a point where we don't need it anymore. We can write code in higher-level languages to help us get to solutions more quickly. If that's the case, why shouldn't we use LLMs to help us get to solutions even more quickly?
I've noticed a tendency to view LLM-assisted coding as somehow less legitimate, but this misses the opportunity to help developers integrate these tools thoughtfully into their workflow. Instead of questioning the validity of using these tools, we should be focusing on how we can help people learn to use them effectively.
In the same way that we helped people learn how to use Google, we should help them use large language models. Back in the early 2000s, when Google was just starting to become a thing, not everybody knew how to use it effectively: excluding specific terms, searching for exact phrases with quotation marks, and so on. But the people who knew how to do that were able to find things more effectively.
I see a parallel here. Instead of dismissing people who use these tools, we should be asking more constructive questions: How do we help them become more effective with LLMs? How do we help them use these tools to actually learn and grow as developers?
Understanding the limitations of large language models is crucial to using them well, but right now we're missing that opportunity by focusing on whether people should use them at all rather than how they can use them better.
We need to take a step back and re-evaluate how we use LLMs and how we encourage others to use them. The goal is getting to a point where we understand that LLMs are one more tool in our developer toolkit, regardless of whether we're working on open-source projects or commercial software. We don't need to avoid these tools. We just need to learn how to use them more effectively, and we need to do this quickly.
Updated TIL
While browsing Mastodon the other day I came across a toot that linked to this Cursed Knowledge page. I thought it was a great page, and it occurred to me that it could be helpful to apply the same sort of styling to my TIL repo.
My TIL (Today I Learned) repository was inspired by Simon Willison's and is basically a rip off completely inspired by him.
Now, I'm not a front end guy, and while I can do some pretty basic stuff with GitHub Actions, I knew that what I wanted to do far outstripped my current capabilities. Enter Claude!
I started with this prompt
You are an expert in GitHub Actions, and GitHub. Additionally, you are an expert in HTML, CSS, and JavaScript. The solutions provided can not leverage JavaScript frameworks.
I then made this request
I currently have a Today I learned (TIL) repository on GitHub located here
https://github.com/ryancheley/til
I came across the idea of documenting cursed information here
https://immich.app/cursed-knowledge/
I would like to implement my TIL in a timeline the same way the cursed knowledge page was done.
How can I do that?
It output this
name: Build TIL Timeline
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
schedule:
# Rebuild daily at 6 AM UTC
- cron: '0 6 * * *'
permissions:
contents: read
pages: write
id-token: write
jobs:
build-timeline:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Generate timeline data
run: |
node << 'EOF'
const fs = require('fs');
const path = require('path');
// Function to extract metadata from markdown files
function extractTilData(filePath, content) {
const relativePath = path.relative('.', filePath);
const fileName = path.basename(filePath, '.md');
// Extract title from first heading or use filename
const titleMatch = content.match(/^#\s+(.+)$/m);
const title = titleMatch ? titleMatch[1] : fileName.replace(/[-_]/g, ' ');
// Extract first paragraph as description
const lines = content.split('\n').filter(line => line.trim());
let description = '';
for (const line of lines) {
if (!line.startsWith('#') && line.trim() && !line.startsWith('```')) {
description = line.trim();
break;
}
}
// Get file stats for date
const stats = fs.statSync(filePath);
const date = stats.mtime; // Use modification time
// Extract category from directory structure
const pathParts = relativePath.split('/');
const category = pathParts.length > 1 ? pathParts[0] : 'General';
return {
title,
description: description || 'No description available',
date: date.toISOString().split('T')[0], // YYYY-MM-DD format
url: `https://github.com/ryancheley/til/blob/main/${relativePath}`,
category,
fileName: relativePath
};
}
// Recursively find all markdown files
function findMarkdownFiles(dir) {
const files = [];
const entries = fs.readdirSync(dir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory() && !entry.name.startsWith('.')) {
files.push(...findMarkdownFiles(fullPath));
} else if (entry.isFile() && entry.name.endsWith('.md') && entry.name !== 'README.md') {
files.push(fullPath);
}
}
return files;
}
// Generate timeline data
const markdownFiles = findMarkdownFiles('.');
const tilEntries = [];
for (const filePath of markdownFiles) {
try {
const content = fs.readFileSync(filePath, 'utf8');
const tilData = extractTilData(filePath, content);
tilEntries.push(tilData);
} catch (error) {
console.error(`Error processing ${filePath}:`, error.message);
}
}
// Sort by date (newest first)
tilEntries.sort((a, b) => new Date(b.date) - new Date(a.date));
// Create output directory
if (!fs.existsSync('docs')) {
fs.mkdirSync('docs');
}
// Write timeline data
fs.writeFileSync('docs/timeline-data.json', JSON.stringify(tilEntries, null, 2));
console.log(`Generated timeline with ${tilEntries.length} entries`);
EOF
- name: Generate timeline HTML
run: |
mkdir -p docs
cat > docs/index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Ryan's Today I Learned Timeline</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
color: #333;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 2rem;
}
.header {
text-align: center;
margin-bottom: 3rem;
color: white;
}
.header h1 {
font-size: 3rem;
margin-bottom: 1rem;
text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
}
.header p {
font-size: 1.2rem;
opacity: 0.9;
max-width: 600px;
margin: 0 auto;
}
.timeline {
position: relative;
margin-top: 2rem;
}
.timeline::before {
content: '';
position: absolute;
left: 2rem;
top: 0;
bottom: 0;
width: 2px;
background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
}
.timeline-item {
position: relative;
margin-bottom: 2rem;
margin-left: 4rem;
background: white;
border-radius: 12px;
padding: 1.5rem;
box-shadow: 0 8px 25px rgba(0,0,0,0.1);
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.timeline-item:hover {
transform: translateY(-5px);
box-shadow: 0 15px 35px rgba(0,0,0,0.15);
}
.timeline-item::before {
content: '';
position: absolute;
left: -3rem;
top: 2rem;
width: 16px;
height: 16px;
background: #4CAF50;
border: 3px solid white;
border-radius: 50%;
box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
}
.timeline-item:nth-child(4n+2)::before { background: #2196F3; box-shadow: 0 0 0 3px rgba(33, 150, 243, 0.3); }
.timeline-item:nth-child(4n+3)::before { background: #FF9800; box-shadow: 0 0 0 3px rgba(255, 152, 0, 0.3); }
.timeline-item:nth-child(4n+4)::before { background: #E91E63; box-shadow: 0 0 0 3px rgba(233, 30, 99, 0.3); }
.timeline-header {
display: flex;
justify-content: space-between;
align-items: flex-start;
margin-bottom: 1rem;
flex-wrap: wrap;
gap: 1rem;
}
.timeline-title {
font-size: 1.4rem;
font-weight: 600;
color: #2c3e50;
text-decoration: none;
flex-grow: 1;
transition: color 0.3s ease;
}
.timeline-title:hover {
color: #3498db;
}
.timeline-meta {
display: flex;
gap: 1rem;
align-items: center;
flex-shrink: 0;
}
.timeline-date {
background: linear-gradient(135deg, #667eea, #764ba2);
color: white;
padding: 0.5rem 1rem;
border-radius: 20px;
font-size: 0.9rem;
font-weight: 500;
}
.timeline-category {
background: #f8f9fa;
color: #6c757d;
padding: 0.4rem 0.8rem;
border-radius: 15px;
font-size: 0.8rem;
font-weight: 500;
border: 1px solid #e9ecef;
}
.timeline-description {
color: #5a6c7d;
line-height: 1.6;
font-size: 1rem;
}
.loading {
text-align: center;
padding: 3rem;
color: white;
font-size: 1.2rem;
}
.error {
background: #f8d7da;
color: #721c24;
padding: 1rem;
border-radius: 8px;
border: 1px solid #f5c6cb;
margin-bottom: 2rem;
}
.stats {
background: rgba(255,255,255,0.95);
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 2rem;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.2);
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 1rem;
text-align: center;
}
.stat-item {
padding: 1rem;
}
.stat-number {
font-size: 2rem;
font-weight: bold;
color: #667eea;
display: block;
}
.stat-label {
color: #666;
font-size: 0.9rem;
margin-top: 0.5rem;
}
@media (max-width: 768px) {
.container {
padding: 1rem;
}
.header h1 {
font-size: 2rem;
}
.timeline::before {
left: 1rem;
}
.timeline-item {
margin-left: 2rem;
padding: 1rem;
}
.timeline-item::before {
left: -2rem;
}
.timeline-header {
flex-direction: column;
align-items: stretch;
}
.timeline-meta {
justify-content: space-between;
}
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Today I Learned</h1>
<p>A timeline of discoveries, learnings, and insights from my development journey</p>
</div>
<div class="stats" id="stats" style="display: none;">
<div class="stats-grid">
<div class="stat-item">
<span class="stat-number" id="totalEntries">0</span>
<div class="stat-label">Total Entries</div>
</div>
<div class="stat-item">
<span class="stat-number" id="categoriesCount">0</span>
<div class="stat-label">Categories</div>
</div>
<div class="stat-item">
<span class="stat-number" id="latestEntry">-</span>
<div class="stat-label">Latest Entry</div>
</div>
</div>
</div>
<div class="loading" id="loading">Loading timeline...</div>
<div class="error" id="error" style="display: none;"></div>
<div class="timeline" id="timeline"></div>
</div>
<script>
async function loadTimeline() {
try {
const response = await fetch('timeline-data.json');
if (!response.ok) {
throw new Error('Failed to load timeline data');
}
const entries = await response.json();
displayTimeline(entries);
displayStats(entries);
} catch (error) {
console.error('Error loading timeline:', error);
document.getElementById('loading').style.display = 'none';
document.getElementById('error').style.display = 'block';
document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
}
}
function displayStats(entries) {
const categories = [...new Set(entries.map(entry => entry.category))];
const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';
document.getElementById('totalEntries').textContent = entries.length;
document.getElementById('categoriesCount').textContent = categories.length;
document.getElementById('latestEntry').textContent = latest;
document.getElementById('stats').style.display = 'block';
}
function displayTimeline(entries) {
const timelineContainer = document.getElementById('timeline');
const loading = document.getElementById('loading');
loading.style.display = 'none';
if (entries.length === 0) {
timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
return;
}
const timelineHTML = entries.map(entry => {
const date = new Date(entry.date).toLocaleDateString('en-US', {
year: 'numeric',
month: 'short',
day: 'numeric'
});
return `
<div class="timeline-item">
<div class="timeline-header">
<a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
${entry.title}
</a>
<div class="timeline-meta">
<span class="timeline-category">${entry.category}</span>
<span class="timeline-date">${date}</span>
</div>
</div>
<div class="timeline-description">
${entry.description}
</div>
</div>
`;
}).join('');
timelineContainer.innerHTML = timelineHTML;
}
// Load timeline on page load
loadTimeline();
</script>
</body>
</html>
EOF
- name: Setup Pages
uses: actions/configure-pages@v4
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: 'docs'
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build-timeline
if: github.ref == 'refs/heads/main'
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
I added it to my TIL repo and tested to see how it looked. It was a great start!
I then followed up with
this is a great start! however, it looks like the are a few issues: 1. All of the dates on the timeline are the published date and not the date of the actual learning. There is a SQLite database that stores this kind of meta data 2. The order of the items matches the order they are displayed on the Read Me, which is by category and not by date (see point 1) 3. the colors for each TIL seem to just rotate in order and aren't associated with any specific category of TIL
Claude went to work and generated a new GitHub Action for me
name: Build TIL Timeline
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
schedule:
# Rebuild daily at 6 AM UTC
- cron: '0 6 * * *'
permissions:
contents: read
pages: write
id-token: write
jobs:
build-timeline:
runs-on: ubuntu-latest
steps:
- name: Check out repo
uses: actions/checkout@v4
with:
fetch-depth: 0 # Need full history for git analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- uses: actions/cache@v4
name: Configure pip caching
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Build database
run: python build_database.py
- name: Generate timeline data from SQLite
run: |
python << 'EOF'
import sqlite3
import json
import os
from pathlib import Path
# Connect to the SQLite database
db_path = Path("tils.db")
if not db_path.exists():
print("Database not found!")
exit(1)
conn = sqlite3.connect(db_path)
conn.row_factory = sqlite3.Row # Enable dict-like access to rows
# Query all TIL entries, ordered by created date (actual learning date)
cursor = conn.execute("""
SELECT
path,
slug,
topic,
title,
url,
body,
created,
created_utc,
updated,
updated_utc
FROM til
ORDER BY created_utc DESC
""")
entries = []
for row in cursor:
# Extract first paragraph as description, excluding headers and code blocks
lines = row['body'].split('\n')
description = ''
for line in lines:
line = line.strip()
if line and not line.startswith('#') and not line.startswith('```') and line != '---':
description = line
break
# Clean up the description
if len(description) > 200:
description = description[:200] + '...'
# Extract date from created_utc (YYYY-MM-DD format)
created_date = row['created_utc'].split('T')[0] if row['created_utc'] else row['created'].split('T')[0]
entry = {
'title': row['title'],
'description': description or 'No description available',
'date': created_date,
'url': row['url'],
'category': row['topic'],
'fileName': row['path'].replace('_', '/'), # Convert back from slug format
'slug': row['slug']
}
entries.append(entry)
conn.close()
# Create output directory
os.makedirs('docs', exist_ok=True)
# Write timeline data
with open('docs/timeline-data.json', 'w') as f:
json.dump(entries, f, indent=2)
print(f"Generated timeline with {len(entries)} entries")
# Generate category statistics for consistent coloring
categories = {}
for entry in entries:
cat = entry['category']
if cat not in categories:
categories[cat] = len([e for e in entries if e['category'] == cat])
# Sort categories by count (most popular first) for consistent color assignment
sorted_categories = sorted(categories.items(), key=lambda x: x[1], reverse=True)
category_colors = {}
# Define a set of distinct colors for categories
color_palette = [
'#4CAF50', # Green
'#2196F3', # Blue
'#FF9800', # Orange
'#E91E63', # Pink
'#9C27B0', # Purple
'#00BCD4', # Cyan
'#FF5722', # Deep Orange
'#795548', # Brown
'#607D8B', # Blue Grey
'#FFC107', # Amber
'#8BC34A', # Light Green
'#3F51B5', # Indigo
'#F44336', # Red
'#009688', # Teal
'#CDDC39', # Lime
]
for i, (category, count) in enumerate(sorted_categories):
category_colors[category] = color_palette[i % len(color_palette)]
# Write category color mapping
with open('docs/category-colors.json', 'w') as f:
json.dump(category_colors, f, indent=2)
print(f"Generated color mapping for {len(category_colors)} categories")
EOF
- name: Generate timeline HTML
run: |
cat > docs/index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Ryan's Today I Learned Timeline</title>
<meta name="description" content="A chronological timeline of learning discoveries from software development, featuring insights on Python, Django, SQL, and more.">
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
color: #333;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 2rem;
}
.header {
text-align: center;
margin-bottom: 3rem;
color: white;
}
.header h1 {
font-size: 3rem;
margin-bottom: 1rem;
text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
}
.header p {
font-size: 1.2rem;
opacity: 0.9;
max-width: 600px;
margin: 0 auto;
}
.filters {
background: rgba(255,255,255,0.95);
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 2rem;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.2);
}
.filter-group {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
align-items: center;
}
.filter-label {
font-weight: 600;
margin-right: 1rem;
color: #666;
}
.category-filter {
padding: 0.4rem 0.8rem;
border-radius: 20px;
border: 2px solid transparent;
background: #f8f9fa;
color: #666;
cursor: pointer;
transition: all 0.3s ease;
font-size: 0.9rem;
user-select: none;
}
.category-filter:hover {
transform: translateY(-2px);
box-shadow: 0 4px 8px rgba(0,0,0,0.1);
}
.category-filter.active {
color: white;
border-color: currentColor;
font-weight: 600;
}
.timeline {
position: relative;
margin-top: 2rem;
}
.timeline::before {
content: '';
position: absolute;
left: 2rem;
top: 0;
bottom: 0;
width: 2px;
background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
}
.timeline-item {
position: relative;
margin-bottom: 2rem;
margin-left: 4rem;
background: white;
border-radius: 12px;
padding: 1.5rem;
box-shadow: 0 8px 25px rgba(0,0,0,0.1);
transition: all 0.3s ease;
opacity: 1;
}
.timeline-item.hidden {
display: none;
}
.timeline-item:hover {
transform: translateY(-5px);
box-shadow: 0 15px 35px rgba(0,0,0,0.15);
}
.timeline-item::before {
content: '';
position: absolute;
left: -3rem;
top: 2rem;
width: 16px;
height: 16px;
background: var(--category-color, #4CAF50);
border: 3px solid white;
border-radius: 50%;
box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
}
.timeline-header {
display: flex;
justify-content: space-between;
align-items: flex-start;
margin-bottom: 1rem;
flex-wrap: wrap;
gap: 1rem;
}
.timeline-title {
font-size: 1.4rem;
font-weight: 600;
color: #2c3e50;
text-decoration: none;
flex-grow: 1;
transition: color 0.3s ease;
}
.timeline-title:hover {
color: #3498db;
}
.timeline-meta {
display: flex;
gap: 1rem;
align-items: center;
flex-shrink: 0;
}
.timeline-date {
background: linear-gradient(135deg, #667eea, #764ba2);
color: white;
padding: 0.5rem 1rem;
border-radius: 20px;
font-size: 0.9rem;
font-weight: 500;
}
.timeline-category {
background: var(--category-color, #f8f9fa);
color: white;
padding: 0.4rem 0.8rem;
border-radius: 15px;
font-size: 0.8rem;
font-weight: 500;
border: 1px solid rgba(255,255,255,0.2);
}
.timeline-description {
color: #5a6c7d;
line-height: 1.6;
font-size: 1rem;
}
.loading {
text-align: center;
padding: 3rem;
color: white;
font-size: 1.2rem;
}
.error {
background: #f8d7da;
color: #721c24;
padding: 1rem;
border-radius: 8px;
border: 1px solid #f5c6cb;
margin-bottom: 2rem;
}
.stats {
background: rgba(255,255,255,0.95);
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 2rem;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.2);
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 1rem;
text-align: center;
}
.stat-item {
padding: 1rem;
}
.stat-number {
font-size: 2rem;
font-weight: bold;
color: #667eea;
display: block;
}
.stat-label {
color: #666;
font-size: 0.9rem;
margin-top: 0.5rem;
}
@media (max-width: 768px) {
.container {
padding: 1rem;
}
.header h1 {
font-size: 2rem;
}
.timeline::before {
left: 1rem;
}
.timeline-item {
margin-left: 2rem;
padding: 1rem;
}
.timeline-item::before {
left: -2rem;
}
.timeline-header {
flex-direction: column;
align-items: stretch;
}
.timeline-meta {
justify-content: space-between;
}
.filter-group {
flex-direction: column;
align-items: stretch;
gap: 1rem;
}
.category-filter {
text-align: center;
}
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Today I Learned</h1>
<p>A chronological timeline of discoveries, learnings, and insights from my development journey</p>
</div>
<div class="stats" id="stats" style="display: none;">
<div class="stats-grid">
<div class="stat-item">
<span class="stat-number" id="totalEntries">0</span>
<div class="stat-label">Total Entries</div>
</div>
<div class="stat-item">
<span class="stat-number" id="categoriesCount">0</span>
<div class="stat-label">Categories</div>
</div>
<div class="stat-item">
<span class="stat-number" id="latestEntry">-</span>
<div class="stat-label">Latest Entry</div>
</div>
<div class="stat-item">
<span class="stat-number" id="filteredCount">0</span>
<div class="stat-label">Showing</div>
</div>
</div>
</div>
<div class="filters" id="filters" style="display: none;">
<div class="filter-group">
<span class="filter-label">Filter by category:</span>
<div id="categoryFilters"></div>
</div>
</div>
<div class="loading" id="loading">Loading timeline...</div>
<div class="error" id="error" style="display: none;"></div>
<div class="timeline" id="timeline"></div>
</div>
<script>
let allEntries = [];
let categoryColors = {};
let activeCategory = null;
async function loadTimeline() {
try {
// Load timeline data and category colors
const [entriesResponse, colorsResponse] = await Promise.all([
fetch('timeline-data.json'),
fetch('category-colors.json')
]);
if (!entriesResponse.ok || !colorsResponse.ok) {
throw new Error('Failed to load timeline data');
}
allEntries = await entriesResponse.json();
categoryColors = await colorsResponse.json();
displayTimeline(allEntries);
displayStats(allEntries);
createCategoryFilters();
} catch (error) {
console.error('Error loading timeline:', error);
document.getElementById('loading').style.display = 'none';
document.getElementById('error').style.display = 'block';
document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
}
}
function createCategoryFilters() {
const categories = [...new Set(allEntries.map(entry => entry.category))];
const filtersContainer = document.getElementById('categoryFilters');
// Add "All" filter
const allFilter = document.createElement('span');
allFilter.className = 'category-filter active';
allFilter.textContent = 'All';
allFilter.onclick = () => filterByCategory(null);
filtersContainer.appendChild(allFilter);
// Add category filters
categories.sort().forEach(category => {
const filter = document.createElement('span');
filter.className = 'category-filter';
filter.textContent = category;
filter.style.setProperty('--category-color', categoryColors[category] || '#666');
filter.onclick = () => filterByCategory(category);
filtersContainer.appendChild(filter);
});
document.getElementById('filters').style.display = 'block';
}
function filterByCategory(category) {
activeCategory = category;
// Update filter button states
document.querySelectorAll('.category-filter').forEach(filter => {
filter.classList.remove('active');
if ((category === null && filter.textContent === 'All') ||
filter.textContent === category) {
filter.classList.add('active');
if (category !== null) {
filter.style.background = categoryColors[category];
}
}
});
// Filter timeline items
const filteredEntries = category ?
allEntries.filter(entry => entry.category === category) :
allEntries;
displayTimeline(filteredEntries);
updateFilteredCount(filteredEntries.length);
}
function updateFilteredCount(count) {
document.getElementById('filteredCount').textContent = count;
}
function displayStats(entries) {
const categories = [...new Set(entries.map(entry => entry.category))];
const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';
document.getElementById('totalEntries').textContent = entries.length;
document.getElementById('categoriesCount').textContent = categories.length;
document.getElementById('latestEntry').textContent = latest;
document.getElementById('filteredCount').textContent = entries.length;
document.getElementById('stats').style.display = 'block';
}
function displayTimeline(entries) {
const timelineContainer = document.getElementById('timeline');
const loading = document.getElementById('loading');
loading.style.display = 'none';
if (entries.length === 0) {
timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
return;
}
const timelineHTML = entries.map(entry => {
const date = new Date(entry.date).toLocaleDateString('en-US', {
year: 'numeric',
month: 'short',
day: 'numeric'
});
const categoryColor = categoryColors[entry.category] || '#666';
return `
<div class="timeline-item" style="--category-color: ${categoryColor}">
<div class="timeline-header">
<a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
${entry.title}
</a>
<div class="timeline-meta">
<span class="timeline-category" style="--category-color: ${categoryColor}; background: ${categoryColor};">
${entry.category}
</span>
<span class="timeline-date">${date}</span>
</div>
</div>
<div class="timeline-description">
${entry.description}
</div>
</div>
`;
}).join('');
timelineContainer.innerHTML = timelineHTML;
}
// Load timeline on page load
loadTimeline();
</script>
</body>
</html>
EOF
- name: Setup Pages
uses: actions/configure-pages@v4
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: 'docs'
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build-timeline
if: github.ref == 'refs/heads/main'
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
Another huge step forward! However, there was one last change that was needed
oh wow! much better! The last issue I'm seeing is that because I have so many categories, the Filter by category needs a way to wrap based on the page width. Right now it's on a single line, but would likely benefit from a css style that will wrap the lines
And I got this
name: Build TIL Timeline
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
schedule:
# Rebuild daily at 6 AM UTC
- cron: '0 6 * * *'
permissions:
contents: read
pages: write
id-token: write
jobs:
build-timeline:
runs-on: ubuntu-latest
steps:
- name: Check out repo
uses: actions/checkout@v4
with:
fetch-depth: 0 # Need full history for git analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- uses: actions/cache@v4
name: Configure pip caching
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Build database
run: python build_database.py
- name: Generate timeline data from SQLite
run: |
python << 'EOF'
import sqlite3
import json
import os
from pathlib import Path
# Connect to the SQLite database
db_path = Path("tils.db")
if not db_path.exists():
print("Database not found!")
exit(1)
conn = sqlite3.connect(db_path)
conn.row_factory = sqlite3.Row # Enable dict-like access to rows
# Query all TIL entries, ordered by created date (actual learning date)
cursor = conn.execute("""
SELECT
path,
slug,
topic,
title,
url,
body,
created,
created_utc,
updated,
updated_utc
FROM til
ORDER BY created_utc DESC
""")
entries = []
for row in cursor:
# Extract first paragraph as description, excluding headers and code blocks
lines = row['body'].split('\n')
description = ''
for line in lines:
line = line.strip()
if line and not line.startswith('#') and not line.startswith('```') and line != '---':
description = line
break
# Clean up the description
if len(description) > 200:
description = description[:200] + '...'
# Extract date from created_utc (YYYY-MM-DD format)
created_date = row['created_utc'].split('T')[0] if row['created_utc'] else row['created'].split('T')[0]
entry = {
'title': row['title'],
'description': description or 'No description available',
'date': created_date,
'url': row['url'],
'category': row['topic'],
'fileName': row['path'].replace('_', '/'), # Convert back from slug format
'slug': row['slug']
}
entries.append(entry)
conn.close()
# Create output directory
os.makedirs('docs', exist_ok=True)
# Write timeline data
with open('docs/timeline-data.json', 'w') as f:
json.dump(entries, f, indent=2)
print(f"Generated timeline with {len(entries)} entries")
# Generate category statistics for consistent coloring
categories = {}
for entry in entries:
cat = entry['category']
if cat not in categories:
categories[cat] = len([e for e in entries if e['category'] == cat])
# Sort categories by count (most popular first) for consistent color assignment
sorted_categories = sorted(categories.items(), key=lambda x: x[1], reverse=True)
category_colors = {}
# Define a set of distinct colors for categories
color_palette = [
'#4CAF50', # Green
'#2196F3', # Blue
'#FF9800', # Orange
'#E91E63', # Pink
'#9C27B0', # Purple
'#00BCD4', # Cyan
'#FF5722', # Deep Orange
'#795548', # Brown
'#607D8B', # Blue Grey
'#FFC107', # Amber
'#8BC34A', # Light Green
'#3F51B5', # Indigo
'#F44336', # Red
'#009688', # Teal
'#CDDC39', # Lime
]
for i, (category, count) in enumerate(sorted_categories):
category_colors[category] = color_palette[i % len(color_palette)]
# Write category color mapping
with open('docs/category-colors.json', 'w') as f:
json.dump(category_colors, f, indent=2)
print(f"Generated color mapping for {len(category_colors)} categories")
EOF
- name: Generate timeline HTML
run: |
cat > docs/index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Ryan's Today I Learned Timeline</title>
<meta name="description" content="A chronological timeline of learning discoveries from software development, featuring insights on Python, Django, SQL, and more.">
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
color: #333;
}
.container {
max-width: 1200px;
margin: 0 auto;
padding: 2rem;
}
.header {
text-align: center;
margin-bottom: 3rem;
color: white;
}
.header h1 {
font-size: 3rem;
margin-bottom: 1rem;
text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
}
.header p {
font-size: 1.2rem;
opacity: 0.9;
max-width: 600px;
margin: 0 auto;
}
.filters {
background: rgba(255,255,255,0.95);
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 2rem;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.2);
}
.filter-group {
display: flex;
flex-direction: column;
gap: 1rem;
}
.filter-label {
font-weight: 600;
color: #666;
margin-bottom: 0.5rem;
}
.category-filters-container {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
align-items: center;
}
.category-filter {
padding: 0.4rem 0.8rem;
border-radius: 20px;
border: 2px solid transparent;
background: #f8f9fa;
color: #666;
cursor: pointer;
transition: all 0.3s ease;
font-size: 0.9rem;
user-select: none;
}
.category-filter:hover {
transform: translateY(-2px);
box-shadow: 0 4px 8px rgba(0,0,0,0.1);
}
.category-filter.active {
color: white;
border-color: currentColor;
font-weight: 600;
}
.timeline {
position: relative;
margin-top: 2rem;
}
.timeline::before {
content: '';
position: absolute;
left: 2rem;
top: 0;
bottom: 0;
width: 2px;
background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
}
.timeline-item {
position: relative;
margin-bottom: 2rem;
margin-left: 4rem;
background: white;
border-radius: 12px;
padding: 1.5rem;
box-shadow: 0 8px 25px rgba(0,0,0,0.1);
transition: all 0.3s ease;
opacity: 1;
}
.timeline-item.hidden {
display: none;
}
.timeline-item:hover {
transform: translateY(-5px);
box-shadow: 0 15px 35px rgba(0,0,0,0.15);
}
.timeline-item::before {
content: '';
position: absolute;
left: -3rem;
top: 2rem;
width: 16px;
height: 16px;
background: var(--category-color, #4CAF50);
border: 3px solid white;
border-radius: 50%;
box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
}
.timeline-header {
display: flex;
justify-content: space-between;
align-items: flex-start;
margin-bottom: 1rem;
flex-wrap: wrap;
gap: 1rem;
}
.timeline-title {
font-size: 1.4rem;
font-weight: 600;
color: #2c3e50;
text-decoration: none;
flex-grow: 1;
transition: color 0.3s ease;
}
.timeline-title:hover {
color: #3498db;
}
.timeline-meta {
display: flex;
gap: 1rem;
align-items: center;
flex-shrink: 0;
}
.timeline-date {
background: linear-gradient(135deg, #667eea, #764ba2);
color: white;
padding: 0.5rem 1rem;
border-radius: 20px;
font-size: 0.9rem;
font-weight: 500;
}
.timeline-category {
background: var(--category-color, #f8f9fa);
color: white;
padding: 0.4rem 0.8rem;
border-radius: 15px;
font-size: 0.8rem;
font-weight: 500;
border: 1px solid rgba(255,255,255,0.2);
}
.timeline-description {
color: #5a6c7d;
line-height: 1.6;
font-size: 1rem;
}
.loading {
text-align: center;
padding: 3rem;
color: white;
font-size: 1.2rem;
}
.error {
background: #f8d7da;
color: #721c24;
padding: 1rem;
border-radius: 8px;
border: 1px solid #f5c6cb;
margin-bottom: 2rem;
}
.stats {
background: rgba(255,255,255,0.95);
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 2rem;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.2);
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 1rem;
text-align: center;
}
.stat-item {
padding: 1rem;
}
.stat-number {
font-size: 2rem;
font-weight: bold;
color: #667eea;
display: block;
}
.stat-label {
color: #666;
font-size: 0.9rem;
margin-top: 0.5rem;
}
@media (max-width: 768px) {
.container {
padding: 1rem;
}
.header h1 {
font-size: 2rem;
}
.timeline::before {
left: 1rem;
}
.timeline-item {
margin-left: 2rem;
padding: 1rem;
}
.timeline-item::before {
left: -2rem;
}
.timeline-header {
flex-direction: column;
align-items: stretch;
}
.timeline-meta {
justify-content: space-between;
}
.filter-group {
gap: 1rem;
}
.category-filters-container {
justify-content: center;
}
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Today I Learned</h1>
<p>A chronological timeline of discoveries, learnings, and insights from my development journey</p>
</div>
<div class="stats" id="stats" style="display: none;">
<div class="stats-grid">
<div class="stat-item">
<span class="stat-number" id="totalEntries">0</span>
<div class="stat-label">Total Entries</div>
</div>
<div class="stat-item">
<span class="stat-number" id="categoriesCount">0</span>
<div class="stat-label">Categories</div>
</div>
<div class="stat-item">
<span class="stat-number" id="latestEntry">-</span>
<div class="stat-label">Latest Entry</div>
</div>
<div class="stat-item">
<span class="stat-number" id="filteredCount">0</span>
<div class="stat-label">Showing</div>
</div>
</div>
</div>
<div class="filters" id="filters" style="display: none;">
<div class="filter-group">
<span class="filter-label">Filter by category:</span>
<div class="category-filters-container" id="categoryFilters"></div>
</div>
</div>
<div class="loading" id="loading">Loading timeline...</div>
<div class="error" id="error" style="display: none;"></div>
<div class="timeline" id="timeline"></div>
</div>
<script>
let allEntries = [];
let categoryColors = {};
let activeCategory = null;
async function loadTimeline() {
try {
// Load timeline data and category colors
const [entriesResponse, colorsResponse] = await Promise.all([
fetch('timeline-data.json'),
fetch('category-colors.json')
]);
if (!entriesResponse.ok || !colorsResponse.ok) {
throw new Error('Failed to load timeline data');
}
allEntries = await entriesResponse.json();
categoryColors = await colorsResponse.json();
displayTimeline(allEntries);
displayStats(allEntries);
createCategoryFilters();
} catch (error) {
console.error('Error loading timeline:', error);
document.getElementById('loading').style.display = 'none';
document.getElementById('error').style.display = 'block';
document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
}
}
function createCategoryFilters() {
const categories = [...new Set(allEntries.map(entry => entry.category))];
const filtersContainer = document.getElementById('categoryFilters');
// Add "All" filter
const allFilter = document.createElement('span');
allFilter.className = 'category-filter active';
allFilter.textContent = 'All';
allFilter.onclick = () => filterByCategory(null);
filtersContainer.appendChild(allFilter);
// Add category filters
categories.sort().forEach(category => {
const filter = document.createElement('span');
filter.className = 'category-filter';
filter.textContent = category;
filter.style.setProperty('--category-color', categoryColors[category] || '#666');
filter.onclick = () => filterByCategory(category);
filtersContainer.appendChild(filter);
});
document.getElementById('filters').style.display = 'block';
}
function filterByCategory(category) {
activeCategory = category;
// Update filter button states
document.querySelectorAll('.category-filter').forEach(filter => {
filter.classList.remove('active');
if ((category === null && filter.textContent === 'All') ||
filter.textContent === category) {
filter.classList.add('active');
if (category !== null) {
filter.style.background = categoryColors[category];
}
}
});
// Filter timeline items
const filteredEntries = category ?
allEntries.filter(entry => entry.category === category) :
allEntries;
displayTimeline(filteredEntries);
updateFilteredCount(filteredEntries.length);
}
function updateFilteredCount(count) {
document.getElementById('filteredCount').textContent = count;
}
function displayStats(entries) {
const categories = [...new Set(entries.map(entry => entry.category))];
const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';
document.getElementById('totalEntries').textContent = entries.length;
document.getElementById('categoriesCount').textContent = categories.length;
document.getElementById('latestEntry').textContent = latest;
document.getElementById('filteredCount').textContent = entries.length;
document.getElementById('stats').style.display = 'block';
}
function displayTimeline(entries) {
const timelineContainer = document.getElementById('timeline');
const loading = document.getElementById('loading');
loading.style.display = 'none';
if (entries.length === 0) {
timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
return;
}
const timelineHTML = entries.map(entry => {
const date = new Date(entry.date).toLocaleDateString('en-US', {
year: 'numeric',
month: 'short',
day: 'numeric'
});
const categoryColor = categoryColors[entry.category] || '#666';
return `
<div class="timeline-item" style="--category-color: ${categoryColor}">
<div class="timeline-header">
<a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
${entry.title}
</a>
<div class="timeline-meta">
<span class="timeline-category" style="--category-color: ${categoryColor}; background: ${categoryColor};">
${entry.category}
</span>
<span class="timeline-date">${date}</span>
</div>
</div>
<div class="timeline-description">
${entry.description}
</div>
</div>
`;
}).join('');
timelineContainer.innerHTML = timelineHTML;
}
// Load timeline on page load
loadTimeline();
</script>
</body>
</html>
EOF
      - name: Setup Pages
        uses: actions/configure-pages@v4
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: 'docs'
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build-timeline
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
All told, from start to finish this took about 20 minutes. There are still some potential updates, but for 20 minutes of work I was able to take a 'wild' idea that I would never have been able to do before and end up with something that I'm actually excited about! It has the added bonus of encouraging me to write more TILs, because I now have this nice-looking timeline of them.
Fun with MCPs
Special Thanks to Jeff Triplett who provided an example that really got me started on better understanding of how this all works.
In trying to wrap my head around MCPs over the long Memorial Day weekend, I had a breakthrough. I'm not really sure why this was so hard for me to grok, but now something seems to have clicked.
I am working with Pydantic AI and so I'll be using that as an example, but since MCPs are a standard protocol, these concepts apply broadly across different implementations.
What is Model Context Protocol (MCP)?
Per the Anthropic announcement (from November 2024!!!!)
The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
What this means is that there is a standard way to extend models like Claude or OpenAI's models with other information. That information can be files on the file system, data in a database, etc.
(Potential) Real World Example
I work for a Healthcare organization in Southern California. One of the biggest challenges with onboarding new hires (and honestly can be a challenge for people that have been with the organization for a long time) is who to reach out to for support on which specific application.
Typically a user will send an email to one of the support teams, and the email request can get bounced around for a while until it finally lands on the 'right' support desk. There's the potential to have the applications themselves include who to contact, but some applications are vendor supplied and there isn't always a way to do that.
Even if there were, in my experience those are often not noticed by users OR the users will think that the support email is for non-technical issues, like "Please update the phone number for this patient" and not issues like, "The web page isn't returning any results for me, but it is for my coworker."
Enter an MCP with a Local LLM
Let's say you have a service that allows you to search through a file system in a predefined set of directories. This service is run with the following command
npx -y --no-cache @modelcontextprotocol/server-filesystem /path/to/your/files
In Pydantic AI, MCPServerStdio uses this same syntax, only it breaks it into two parts:
- command
- args
The command is any application in your $PATH, like uvx or docker or npx, or you can explicitly point to the executable by its full path, like /Users/ryancheley/.local/share/mise/installs/bun/latest/bin/bunx.
The args are the arguments you'd pass to that application.
Taking the command from above and breaking it down, we can set up our MCP server using the following:
MCPServerStdio(
    "npx",
    args=[
        "-y",
        "--no-cache",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/files",
    ],
)
Application of MCP with the Example
Since I work in Healthcare and want to be mindful of protecting patient data (even though that data won't be exposed to this LLM), I'll use Ollama to construct my example.
I created a support.csv file that contains the following information
- Common Name of the Application
- URL of the Application
- Support Email
- Support Extension
- Department
I used the following prompt
Review the file `support.csv` and help me determine who I contact about questions related to CarePath Analytics.
Here are the contents of the support.csv file
| Name | URL | Support Email | Support Extension | Department |
|---|---|---|---|---|
| MedFlow Solutions | https://medflow.com | support@medflow.com | 1234 | Clinical Systems |
| HealthTech Portal | https://healthtech-portal.org | help@medflow.com | 3456 | Patient Services |
| CarePath Analytics | https://carepath.io | support@medflow.com | 4567 | Data Analytics |
| VitalSign Monitor | https://vitalsign.net | support@medflow.com | 1234 | Clinical Systems |
| Patient Connect Hub | https://patientconnect.com | contact@medflow.com | 3456 | Patient Services |
| EHR Bridge | https://ehrbridge.org | support@medflow.com | 2341 | Integration Services |
| Clinical Workflow Pro | https://clinicalwf.com | support@medflow.com | 1234 | Clinical Systems |
| HealthData Sync | https://healthdata-sync.net | sync@medflow.com | 6789 | Integration Services |
| TeleHealth Connect | https://telehealth-connect.com | help@medflow.com | 3456 | Patient Services |
| MedRecord Central | https://medrecord.central | records@medflow.com | 5678 | Medical Records |
The script is below:
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "pydantic-ai",
# ]
# ///
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider


async def main():
    # Configure the Ollama model using the OpenAI-compatible API
    model = OpenAIModel(
        model_name='qwen3:8b',  # or whatever model you have installed locally
        provider=OpenAIProvider(base_url='http://localhost:11434/v1')
    )
    # Set up the MCP server to access our support files
    support_files_server = MCPServerStdio(
        "npx",
        args=[
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/path/to/your/files"  # Directory containing support.csv
        ]
    )
    # Create the agent with the model and MCP server
    agent = Agent(
        model=model,
        mcp_servers=[support_files_server],
    )
    # Run the agent with the MCP server
    async with agent.run_mcp_servers():
        # Get a response from Ollama about the support contact
        result = await agent.run(
            "Review the file `support.csv` and help me determine who I contact about questions related to CarePath Analytics?"
        )
        print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
As a user, if I ask who I contact about questions related to CarePath Analytics, the LLM will search through the support.csv file and supply the support email.
This example is a command line script; a web interface would probably be better for most users, and that would be the next thing I'd try to do here, as sketched below.
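To give a rough idea of what that next step could look like, here's a minimal sketch (my addition, not part of the original example): it wraps the same Ollama-backed agent and filesystem MCP server in a small FastAPI endpoint. FastAPI is just my choice here, and the model name, file path, and endpoint shape are assumptions carried over from the script above.

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "fastapi",
#     "pydantic-ai",
#     "uvicorn",
# ]
# ///
# A hedged sketch: a tiny web front end that forwards a user's question to the
# same agent used in the command line script. Paths and model name are assumptions.
from fastapi import FastAPI
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

app = FastAPI()

model = OpenAIModel(
    model_name='qwen3:8b',
    provider=OpenAIProvider(base_url='http://localhost:11434/v1'),
)
support_files_server = MCPServerStdio(
    "npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/files"],
)
agent = Agent(model=model, mcp_servers=[support_files_server])


@app.get("/support")
async def support(question: str) -> dict[str, str]:
    # Start the MCP server for the duration of the request and ask the agent
    async with agent.run_mcp_servers():
        result = await agent.run(
            f"Review the file `support.csv` and answer: {question}"
        )
    return {"answer": result.output}

You could serve this with uvicorn and point a simple HTML form at the /support endpoint, so users never have to touch a terminal.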
Once that was done you could extend it to also include an MCP to write an email on the user's behalf. It could even ask probing questions to help make sure that the email had more context for the support team.
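One hedged sketch of how that could start (again, my illustration, not something from the original post): Pydantic AI lets you register tools on an agent, so alongside the filesystem MCP server you could give the model a function that only produces an email draft for the user to review, rather than sending anything.

# A hedged sketch, assuming the `agent` defined in the script above is in scope.
# The tool only returns a draft; actually sending the email (or wiring up a
# dedicated email MCP server) would be a separate, more carefully secured step.
@agent.tool_plain
def draft_support_email(to_address: str, subject: str, body: str) -> str:
    """Return a formatted draft email for the user to review before sending."""
    return f"To: {to_address}\nSubject: {subject}\n\n{body}"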
Some support systems have their own ticketing / issue tracking systems and it would be really valuable if this ticket could be written directly to that system. With the MCP this is possible.
We'd need to update the support.csv file with some information about direct writes via an API, and we'd need to secure the crap out of this, but it is possible.
Now the user can be more confident that their issue will go to the team it needs to go to, and that their question / issue can be resolved much more quickly.
Uptime Kuma and Claude
While talking to a colleague of mine he had mentioned something called Uptime Kuma, which is an open source alternative to Uptime Robot. This was interesting for me because I have Uptime Robot checking a couple of my websites and I thought, "Hey, maybe I can use Uptime Kuma as a replacement for Uptime Robot."
I dove into the docs a bit and discovered that there was a Docker image for it, which was nice.
I then used Claude to take the docker run command
docker run -d \
--restart=always \
-p 3001:3001 \
-v uptime-kuma:/app/data \
--name uptime-kuma louislam/uptime-kuma:1
and turn it into a compose file, which gave me this
version: '3'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data
volumes:
  uptime-kuma:
Next, I took the output of that compose file and asked Claude to also generate a caddy configuration for me and it gave me this
monitor.yourdomain.com {
    # Enable compression
    encode gzip zstd

    # Add security headers
    header {
        # Enable HSTS
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        # Prevent content-type sniffing
        X-Content-Type-Options "nosniff"
        # Referrer policy
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    # Reverse proxy to Uptime Kuma
    reverse_proxy localhost:3001
}
I then SSH'd into one of my servers using Shellfish 1.
I updated the docker-compose.yml file and my Caddyfile to include what Claude had output.
I restarted my docker containers, but my new container didn't come up.
So I took the whole Docker Compose file from my server and I put that into Claude and said,
Hey, is there anything wrong with my Docker Compose file?
It indicated that there were some issues and provided suggested updates. I made those changes and did the same thing with the Caddyfile. Again, Claude offered up some changes. I applied the recommended changes to the docker-compose.yml file and the Caddyfile, then stopped and started my docker containers.
I suddenly had an instance of Uptime Kuma. All in all, it took about a half hour from start to finish while I was watching a hockey game ... from my iPad.
I didn't really have to do anything other than a couple of tweaks here and there on the Docker Compose file and a couple of tweaks here and there on the Caddyfile, and I suddenly have this tool that allows me to monitor the uptime of various websites that I'm interested in.
As I wrapped up it hit me ... holy crap, this is an amazing time to live2. You have an idea, Claude (or whatever AI tool you want to use) outputs a thing, and then you're up and running. This really reduces that barrier to entry to just try new things.
Is the Docker Compose file the most performant? I don't know. Is the Caddyfile the most locked-down configuration it could be? I don't know.
But for these small projects that are just me, I don't know how much it really matters.
uv and pip
On Sunday November 3 I posted this to Mastodon:
I've somehow managed to get Python on my macbook to not install packages into the virtual environment I've activated and I'm honestly not sure how to fix this.
Has anyone else ever run into this problem? If so, any pointers on how to fix it?
I got lots of helpful replies and with those replies I was able to determine what the issue was and 'fix' it.
A timeline of events
I was working on updating a library of mine, and because it had been a while since it had been worked on, I had to git clone it locally. When I did that, I set out to try uv for virtual environment management.
This worked well (and was lightning FAST) and I was hacking away at the update I wanted to do.
Then I had a call with my daughter to review her upcoming schedule for the spring semester. When I got back to working on my library I kind of dove right in and started to get an error message about the library not being installed:
zsh: command not found: the-well-maintained-test
So I tried to install it (though I was 100% sure it was already there) and got this message
ERROR: Could not find an activated virtualenv (required).
I deleted the venv directory and started over again (using uv still) and ran into the same issue.
I restarted my Mac (at my day job I use Windows computers and this is just a natural reaction when something doesn't work the way I think it should1)
That didn't fix the issue 😢
I spent the next little while certain that in some way pipx or pyenv had jacked up my system, so I uninstalled them ... now you might ask why I thought this, and dear reader, I have no f$%&ing clue.
With those pesky helpers out of the way, pip still wasn't working the way I expected it to!
I then took to Mastodon and with this one response I saw what I needed
@ryancheley Are you running python -m pip install... Or just pip install...? If that's a venv created by uv, pip isn't installed I think, so 'pip install' might resolve to a pip in a different python installation
I went back to my terminal, and sure enough that was the issue. I haven't used uv enough to get a real sense of it, and when I was done talking with my daughter, my brain switched to Python programming, but it forgot that I had used uv to set everything up.
Lessons learned
This was a good lesson but I'm still unsure about a few things:
- How do I develop a CLI using `uv`?
- Why did it seem that my CLI testing worked fine right up until the call with my daughter, and now it seems that I can't develop CLIs with `uv`?
I did write a TIL for this, but I discovered that
uv venv venv
is not a full replacement for
python -m venv venv
Specifically, uv does not include pip in the virtual environment it creates, which is what contributed to my issues. You can include pip by running this command instead:
uv venv venv --seed
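If you want to double-check which environment you're actually in, and which pip your shell would run, here's a small diagnostic sketch (my addition, not part of the original TIL) that you can run with the venv's interpreter:

# A quick diagnostic: run with the venv's python to see whether `pip` on your
# PATH actually belongs to the active environment or to some other installation.
import shutil
import sys

print("interpreter:", sys.executable)        # the Python actually running
print("environment:", sys.prefix)            # the venv (or base install) it belongs to
print("pip on PATH:", shutil.which("pip"))   # may resolve somewhere else entirely

If the last line points outside your venv, `pip install` will happily install packages somewhere you didn't intend, which is exactly what the Mastodon reply above was describing.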
Needless to say, with the help of some great people on the internet I got my issue resolved, but I did spend a good portion of Monday evening un-f$%&ing my MacBook Pro by reinstalling pyenv and pipx2 ... and cleaning up my system Python for 3.12 and 3.13 ... turns out Homebrew REALLY doesn't want you to do anything with the system Python, even if you accidentally installed a bunch of cruft in there.
- Yes this is dumb, and yes I hate it ↩︎
- As of this writing I've uninstalled pipx because `uv` can replace it too. See Jeff Triplett's post uv does everything ↩︎
Migrating django-tailwind-cli to Django Commons
On Tuesday October 29 I worked with Oliver Andrich, Daniel Moran and Storm Heg to migrate Oliver's project django-tailwind-cli from Oliver's GitHub project to Django Commons.
This was the 5th library that has been migrated over, but the first one that I 'led'. I was a bit nervous. The Django Commons docs are great and super helpful, but the first time you do something, it can be nerve wracking.
One thing that was super helpful was knowing that Daniel and Storm were there to help me out when any issues came up.
The first setup steps are pretty straightforward and we were able to get through them pretty quickly. Then we ran into an issue that none of us had seen previously.
django-tailwind-cli had initially set up GitHub Pages for the docs, but later migrated to Read the Docs. However, GitHub Pages was still enabled in the repo, so when we tried to migrate it over we ran into an error. Apparently you can't remove GitHub Pages using Terraform (the tool we use to manage the organization).
We spent a few minutes trying to parse the error, make some changes, and try again (and again) and we were able to finally successfully get the migration completed 🎉
Another thing that came up during the migration was a maintainer that was set in the front end, but not in the Terraform file. Also, while I was making changes to the Terraform file locally, I ran into an issue with an update that had been made in the GitHub UI on my branch, which caused a conflict for me locally.
I've had to deal with this kind of thing before, but ... never with an audience! Trying to work through the issue was a bit stressful to say the least 😅
But, with the help of Daniel and Storm I was able to resolve the conflicts and get the code pushed up.
As of this writing we have 6 libraries that are part of the Django Commons organization, and I'm really excited for the next time I get to lead a migration. Who knows, at some point I might actually be able to do one on my own ... although our hope is that this can be automated much more ... so maybe that's what I can work on next.
Working on a project like this has been really great. There are such great opportunities to learn various technologies (Terraform, GitHub Actions, git) and to work with great collaborators.
What I'm hoping to be able to work on this coming weekend is1:
- Get a better understanding of Terraform and how to use it with GitHub
- Use Terraform to do something with GitHub Actions
- Try and create a merge conflict and then use the git cli, or Git Tower, or VS Code to resolve the merge conflict
For number 3 in particular, I want to be more comfortable fixing those kinds of issues so that if / when they come up again I can resolve them.
- Now will I actually be able to 🤷🏻 ↩︎
Django Commons
First, what are "the commons"? The concept of "the commons" refers to resources that are shared and managed collectively by a community, rather than being owned privately or by the state. This idea has been applied to natural resources like air, water, and grazing land, but it has also expanded to include digital and cultural resources, such as open-source software, knowledge databases, and creative works.
As Organization Administrators of Django Commons, we're focusing on sustainability and stewardship as key aspects.
Asking for help is hard, but it can be done more easily in a safe environment. As we saw with the xz utils backdoor attack, maintainer burnout is real. And while there are several arguments about being part of a 'supply chain', if we can, as a community, offer up a place where maintainers can work together for the sustainability and support of their packages, the Django community will be better off!
From the README of the membership repo in Django Commons
Django Commons is an organization dedicated to supporting the community's efforts to maintain packages. It seeks to improve the maintenance experience for all contributors; reducing the barrier to entry for new contributors and reducing overhead for existing maintainers.
OK, but what does this new organization get me as a maintainer? The (stretch) goal is that we'll be able to provide support to maintainers, whether that's helping to identify best practices for packages (like requiring tests), or normalizing the idea that maintainers can take a step back from their project and know that there will be others to help keep it going. Being able to accomplish these two goals would be amazing ... but we want to do more!
In the long term we're hoping that we're able to do something to help provide compensation to maintainers, but as I said, that's a long term goal.
The project was spearheaded by Tim Schilling and he was able to get lots of interest from various folks in the Django Community. But I think one of the great aspects of this community project is the transparency that we're striving for. You can see here an example of a discussion, out in the open, as we try to define what we're doing, together. Also, while Tim spearheaded this effort, we're really all working as equals towards a common goal.
What we're building here is a sustainable infrastructure and community. This community will allow packages to have a good home, to allow people to be as active as they want to be, and also allow people to take a step back when they need to.
Too often in tech, and especially in OSS, maintainers / developers will work and work and work because the work they do is generally interesting, and has interesting problems to try and solve.
But this can have a downside that we've all seen .. burnout.
By providing a platform for maintainers to 'park' their projects, along with the necessary infrastructure to keep them active, the goal is to allow maintainers the opportunity to take a break if, or when, they need to. When they're ready to return, they can do so with renewed interest, with new contributors and maintainers who have helped create a more sustainable environment for the open-source project.
The idea for this project is very similar to, but different from, Jazz Band. Again, from the README
Django Commons and Jazzband have similar goals, to support community-maintained projects. There are two main differences. The first is that Django Commons leans into the GitHub paradigm and centers the organization as a whole within GitHub. This is a risk, given there's some vendor lock-in. However, the repositories are still cloned to several people's machines and the organization controls the keys to PyPI, not GitHub. If something were to occur, it's manageable.
The second is that Django Commons is built from the beginning to have more than one administrator. Jazzband has been working for a while to add additional roadies (administrators), but there hasn't been visible progress. Given the importance of several of these projects it's a major risk to the community at large to have a single point of failure in managing the projects. By being designed from the start to spread the responsibility, it becomes easier to allow people to step back and others to step up, making Django more sustainable and the community stronger.
One of the goals for Django Commons is to be very public about what's going on. We actively encourage use of the Discussions feature in GitHub and have several active conversations happening there now1 2 3
So far we've been able to migrate ~~3~~ 4 libraries4 5 6 7 into Django Commons. Each one has been a great learning experience, not only for the library maintainers, but also for the Django Commons admins.
We're working to automate as much of the work as possible. Daniel Moran has done an amazing job of writing Terraform scripts to help in the automation process.
While there are still several manual steps, with each new library, we discover new opportunities for automation.
This is an exciting project to be a part of. If you're interested in joining us, you have a couple of options:
- Transfer your project into Django Commons
- Join as member and help contribute to one of the projects that's already in Django Commons
I'm looking forward to seeing you be part of this amazing community!
DjangoCon US 2024 Talk
At DjangoCon US 2023 I gave a talk, and I wrote about my experience preparing for that talk.
Well, I spoke again at DjangoCon US this year (2024) and had a similar, but wildly different experience in preparing for my talk.
Last year I lamented that I didn't really track my time (which is weird because I track my time for ALL sorts of things!).
This year, I did track my time and have a much better sense of how much time I prepared for the talk.
Another difference between each year is that in 2023 I gave a 45 minute talk, while this year my talk was 25 minutes.
I've heard that you need about 1 hour of prep time for each 1 minute of talk that you're going to give. That means that, on average, for a 25 minute talk I'd need about 25 hours of prep time.
My time tracking shows that I was a little short of that (19 hours) but my talk ended up being about 20 minutes, so it seems that maybe I was on track for that.
This year, as last year, my general prep technique was to:
- Give the presentation AND record it
- Watch the recording and make notes about what I needed to change
- Make the changes
I would typically do each step on a different day, though towards the end I would do steps 2 and 3 on the same day, and during the last week I would do all of the steps on the same day.
This flow really seems to help me get the most of out practicing my talk and getting a sense of its strengths and weaknesses.
One issue that came up a week before I was to leave for DjangoCon US was that my boss said I couldn't have anything directly related to my employer in the presentation. My initial drafts didn't have specifics, but the examples I used were too close for my comfort, so I ended up having to refactor that part of my talk.
Honestly, I think it came out better because of it. During my practice runs I felt like I was kind of dancing around topics, but once I removed them I felt freer to just kind of speak my mind.
Preparing and giving talks like these are truly a ton of work. Yes, you'll (most likely) be given a free ticket to the conference you're speaking at — but unless you're a seasoned public speaker you will have to practice a lot to give a great talk.
One thing I didn't mention in my prep time is that my talk was essentially just a rendition of my series of blog posts I started writing at DjangoCon US 2023 (Error Culture)
So when you add in the time it took for me to brainstorm those articles, write, and edit them, we're probably looking at another 5 - 7 hours of prep.
This puts me closer to the 25 hours of prep time for the 25 minute talk.
I've given 2 talks so far, and after each one I've said, 'Never again!'
It's been a few weeks since I gave my talk, and I have to say, I'm kind of looking forward to trying to give a talk again next year. Now, I just need to figure out what I would talk about that anyone would want to hear. 🤔