
/Now Page Update – Podcast Episode Artwork

Raycast

Part of the Raycast series

I made a change yesterday to my /now page to make the Podcasts section look nicer by including podcast artwork next to the episode entries. I think it adds visual interest, and (most importantly) it was fun to update my recent podcast history script to include it.

This change was partially spurred by my switching from Overcast to Apple Podcasts as my podcast app, due to a couple of Overcast bugs that were annoying me. However, switching podcast apps did mean that updating the podcast episodes on my /now page would be a bit more manual, because Apple Podcasts doesn’t save listening history in a way I can retrieve via automation. I started looking at using the iTunes API to grab episode information, and while doing so, I figured I may as well perform the simple chore of also grabbing the artwork along with the other episode information.

But then Marco updated Overcast a couple of days ago and it fixed some of the bugs that were bugging me, so I bugged right back out of Apple Podcasts land and back to Overcast.1 At this point, I could have just gone back to my Recent Overcast History Raycast script command with no changes, and no one would ever have noticed based on what appears on my /now page, but the thought of showing podcast artwork as part of my recent listening history stuck with me. I decided I had to do it.

Understanding how this works requires a little background on other parts of this script. Rather than bore you to death with ALL the details, I’ll just give you some relevant code snippets and then give full Python module code dumps at the end. First, though, here’s a before-and-after comparison of my recent podcast listening history.

Then: ThenNowPodcasts

Now: NowNowPodcasts

Podcast Listening History Overview

Basically the way I automate grabbing my recent podcast listening history for my /now page goes like this:

  1. Log into my Overcast account and download my Overcast history OPML file.
  2. Parse the OPML file for all podcast-episode elements with the played attribute set to 1.
  3. Filter these down to a specific number of days’ worth, order them with the most recently listened first, and then filter them again so that only the 10 most recent are returned.
  4. For each episode in this list, load the Overcast episode details page and parse the URL of the podcast artwork image out of it. Download the artwork and shove it in a specific folder. Then create the markdown links (including images) for the episode for display on the /now page and append them to the full list of episode links.
  5. Put this list of markdown links on the clipboard.
  6. Run a Retrobatch action to scale the artwork way down and put it in the correct folder in the website source for site compilation.

All of the above happens automatically when I run the script. I have it set up as a Raycast script command, so I trigger it manually. I could have it run on a schedule and automatically update, compile, and push the site, but I don’t. That’s a logical series of next steps for anyone who wants a fully automated experience.
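
As a sketch of what that could look like (this isn’t something I actually run): a single crontab entry could run the script on a schedule and then build and push the site. It assumes the script is changed to write the Markdown into the page source instead of the clipboard, that the Retrobatch step gets run too, and that the site builds and deploys with npm and git.

# Hypothetical crontab entry, not something I actually run
30 3 * * * /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/main.py && cd /Users/scott/Sites/scottwillsey && npm run build && git commit -am "Update /now page" && git push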

The Raycast Script Command

Here’s the Raycast script command. It’s called Recent Overcast History. Note that any Raycast script command can be run outside of Raycast like any other script of its kind (bash, in this case), but I do call this through Raycast.

#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Recent Overcast History
# @raycast.mode fullOutput
# Optional parameters:
# @raycast.icon ../icons/Overcast.png
# @raycast.argument1 { "type": "dropdown", "placeholder": "Choose Mode", "data": [ { "title": "Normal", "value": "normal" }, { "title": "Force", "value": "force" } ] }
# @raycast.packageName Website
# Documentation:
# @raycast.description Grabs OC history and gets the most recent listened or in progress from it
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey
TYPE=$1
if [ "$TYPE" = 'force' ]; then
/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/main.py -f
else
/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/main.py
fi
/Applications/Retrobatch.app/Contents/MacOS/Retrobatch --workflow /Users/scott/Scripts/scottwillsey/scottwillsey-images/OCPodcastArtworkResize.retrobatch &
wait
# delete all files in /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images
rm -f /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images/*

Recent Overcast History has one dropdown parameter I need to choose: “Normal” or “Force”. I have this because if the OPML file is less than 2 days old, the script skips the download, and it only re-parses the existing file if I choose “Force”. One nice thing about Raycast is that it always remembers what your last choice was and defaults to that.

The Python Scripts

Recent Overcast History’s first action is to run scottwillsey-overcast-history/main.py, which is responsible for initiating the series of steps I outlined above.

main.py
#!/Users/scott/Scripts/python/venv/bin/python
import argparse
import os
from datetime import datetime, timedelta

import constants as const
from oc_history import load_oc_history
from oc_opml_parse import oc_opml_parse

p = argparse.ArgumentParser()
p.add_argument('-f', '--force', action='store_true', help='Force local OPML file parsing')
args = p.parse_args()


def file_is_old(file_path):
    if not os.path.exists(file_path):
        return True
    file_mod_date = os.path.getmtime(file_path)
    display_date = datetime.fromtimestamp(file_mod_date)
    print(f'OPML file created on {display_date.strftime("%Y-%m-%d")}')
    file_datetime = datetime.fromtimestamp(file_mod_date)
    print(f'file_datetime = {file_datetime}')
    stale_date = datetime.now() - timedelta(days=const.OPML_AGE_LIMIT_DAYS)
    print(f'stale_date = {stale_date}')
    return file_datetime < stale_date


def main():
    history_was_loaded = False
    if file_is_old(const.OPML_FILE_PATH):
        print(f'OPML file is older than {const.OPML_AGE_LIMIT_DAYS} days or doesn\'t exist. Downloading new data...')
        history_was_loaded = load_oc_history()
    else:
        print(f'OPML file is less than {const.OPML_AGE_LIMIT_DAYS} days old. Skipping download.')
    if history_was_loaded or args.force:
        print('Parsing OPML file...')
        if oc_opml_parse():
            print('Done!')
        else:
            print('You have to update your podcast list manually.')
    else:
        print('No new Overcast history generated.')


if __name__ == "__main__":
    main()
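
These scripts all import a small constants module that isn’t shown in this post. Here’s a minimal sketch of what it might contain; the OPML file path is an assumption, and the other values reflect what’s described in this post.

# constants.py (hypothetical sketch, not the actual module)
OPML_FILE_PATH = "/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/overcast.opml"  # assumed location
OPML_AGE_LIMIT_DAYS = 2    # matches the "less than 2 days old" download rule
NUMBER_OF_EPISODES = 10    # how many recent episodes end up on the /now page
SUCCESS = 200              # HTTP OK
OC_ARTWORK_FILE_PATH = "/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images"  # artwork staging folder for Retrobatch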

I’m not going to go through all the scripts like the login script and the downloading of the OPML file, but the OPML parsing script is interesting so you can see how I’m finding the episode history I want, getting podcast artwork for it, and then creating the markdown for each episode that goes on the /now page.
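
That said, here’s a rough sketch of what the login and OPML download piece might look like. It’s not my actual code (in the real scripts the shared requests session lives in its own session module), and the login form fields and export URL are assumptions.

# oc_history.py (hypothetical sketch of the login/download step, not the real script)
import requests
import constants as const

# In the real setup this shared session lives in a separate session module
session = requests.Session()

def load_oc_history():
    # Log into Overcast; credentials should come from a keychain or environment variable
    login = session.post("https://overcast.fm/login",
                         data={"email": "me@example.com", "password": "correct-horse-battery-staple"})
    if login.status_code != const.SUCCESS:
        print("Overcast login failed")
        return False
    # Download the extended OPML export, which includes per-episode listening history
    opml = session.get("https://overcast.fm/account/export_opml/extended")
    if opml.status_code != const.SUCCESS:
        print("OPML download failed")
        return False
    with open(const.OPML_FILE_PATH, "w") as f:
        f.write(opml.text)
    return True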

oc_opml_parse.py
import pyperclip
import xml.etree.ElementTree as ET
import constants as const
from datetime import datetime, timezone, timedelta
from oc_artwork import load_episode_artwork


def find_podcast_name(root, episode_id):
    for podcast in root.findall(".//outline[@type='rss']"):
        for ep in podcast.findall("outline[@type='podcast-episode']"):
            if ep.get('overcastId') == episode_id:
                return podcast.get('text')
    return "Unknown"


def oc_opml_parse():
    try:
        with open(const.OPML_FILE_PATH, 'r') as f:
            content = f.read()
    except FileNotFoundError:
        print(f"File not found: {const.OPML_FILE_PATH}")
        return None
    root = ET.fromstring(content)
    # Find all podcast episode entries
    episodes = root.findall(".//outline[@type='podcast-episode']")
    current_date = datetime.now(timezone.utc)
    # Filter episodes with played="1" within the age limit
    played_episodes = [
        ep for ep in episodes
        if ep.get('played') == '1' and
        (current_date - datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z")).days <= (const.OPML_AGE_LIMIT_DAYS + 1)
    ]
    # Sort episodes by userUpdatedDate, most recent first
    played_episodes.sort(key=lambda ep: datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z"), reverse=True)
    # Get the most recent episodes
    top_episodes = played_episodes[:const.NUMBER_OF_EPISODES]
    # Build and print the markdown links
    episodes_list = ""
    for ep in top_episodes:
        episode_artwork = load_episode_artwork(ep.get('overcastId'), ep.get('overcastUrl'))
        image_link = f"[![{ep.get('title')}]({episode_artwork})]({ep.get('overcastUrl')})" if episode_artwork else ""
        episodes_list += f"{image_link}[{find_podcast_name(root, ep.get('overcastId'))} – {ep.get('title')}]({ep.get('overcastUrl')})\n"
    print(episodes_list)
    pyperclip.copy(episodes_list)
    return True

All the real fun happens in the function oc_opml_parse(), which is called by main.py. First it tries to open the OPML file and then uses the xml.etree.ElementTree module to get the XML model of the data.

def oc_opml_parse():
    try:
        with open(const.OPML_FILE_PATH, 'r') as f:
            content = f.read()
    except FileNotFoundError:
        print(f"File not found: {const.OPML_FILE_PATH}")
        return None
    root = ET.fromstring(content)

Next, it parses that for the list of episodes and starts winnowing them down to played episodes from a specific timespan, ordered most recent first, and then keeps the first 10 episodes on that list.

# Find all podcast episode entries
episodes = root.findall(".//outline[@type='podcast-episode']")
current_date = datetime.now(timezone.utc)
# Filter episodes with played="1" within the age limit
played_episodes = [
    ep for ep in episodes
    if ep.get('played') == '1' and
    (current_date - datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z")).days <= (const.OPML_AGE_LIMIT_DAYS + 1)
]
# Sort episodes by userUpdatedDate, most recent first
played_episodes.sort(key=lambda ep: datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z"), reverse=True)
# Get the most recent episodes
top_episodes = played_episodes[:const.NUMBER_OF_EPISODES]

Finally, it creates the markdown for all of the episodes in the list, including loading the Overcast episode page for each episode and finding the podcast artwork from it.

# Build and print the markdown links
episodes_list = ""
for ep in top_episodes:
    episode_artwork = load_episode_artwork(ep.get('overcastId'), ep.get('overcastUrl'))
    image_link = f"[![{ep.get('title')}]({episode_artwork})]({ep.get('overcastUrl')})" if episode_artwork else ""
    episodes_list += f"{image_link}[{find_podcast_name(root, ep.get('overcastId'))} – {ep.get('title')}]({ep.get('overcastUrl')})\n"
print(episodes_list)
pyperclip.copy(episodes_list)
return True

Here’s what that load_episode_artwork part looks like:

oc_artwork.py
import os
import constants as const
from session import session
import re
import uuid


def load_episode_artwork(episode_id, episode_url):
    # Get the Overcast episode page
    r = session.get(episode_url)
    if r.status_code == const.SUCCESS:
        print('Overcast episode page downloaded')
        # Search the contents of r.text for podcast artwork and save the first match to a variable
        match = re.search(r'<img class="art fullart" src="(https:\/\/public\.overcast-cdn\.com\/art\/.*)"\/>', r.text)
        if match:
            # Get the URL of the artwork
            artwork_url = match.group(1)
            print(f'Artwork URL: {artwork_url}')
            # Download and save the artwork to a file
            art = session.get(artwork_url)
            if art.status_code == const.SUCCESS:
                print('Artwork downloaded')
                # Save the artwork to a file
                # Generate a file name including UUID
                file_name = f'{episode_id}-{uuid.uuid4()}.png'
                artwork_file_path = os.path.join(const.OC_ARTWORK_FILE_PATH, file_name)
                with open(artwork_file_path, 'wb') as f:
                    f.write(art.content)
                print(f'Artwork saved to {artwork_file_path}')
                artwork_url = f'../../assets/images/oc_artwork/{file_name}'
                return artwork_url
            else:
                print(f'Failed to download artwork. Status code: {art.status_code}')
        else:
            print('No artwork URL found on episode page.')
    else:
        print(f'Failed to download Overcast episode page. Status code: {r.status_code}')
    return None

Because the image URL for the artwork always contains https://public.overcast-cdn.com/art/, it’s a simple thing to find it with a regular expression and then download that file. I name the artwork files with the Overcast episode ID plus an appended UUID, and I save them to a folder for processing by Retrobatch.

The Retrobatch Script

Remember how the script command runs the Python script and then runs a Retrobatch action (script command included below for memory-jogging purposes)?

Terminal window
TYPE=$1
if [ "$TYPE" = 'force' ]; then
/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/main.py -f
else
/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/main.py
fi
/Applications/Retrobatch.app/Contents/MacOS/Retrobatch --workflow /Users/scott/Scripts/scottwillsey/scottwillsey-images/OCPodcastArtworkResize.retrobatch &
wait
# delete all files in /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images
rm -f /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images/*

Here’s what the Retrobatch action looks like:

RetrobatchOCArtwork

It grabs images from /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images, where the Python script saved them, resizes them to 250 pixels wide (with the height scaled proportionally), and saves them to my website source’s oc_artwork directory for inclusion when the site is compiled.

After that, the script command deletes the source images from /Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images.
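
If you don’t use Retrobatch, the resize half of that step could be approximated with a few lines of Pillow. This is just a sketch rather than what I actually run, and the destination path is an assumption about where the artwork lives in the website source.

# Hypothetical Pillow version of the artwork resize step (I actually use Retrobatch)
import os
from PIL import Image

src_dir = "/Users/scott/Scripts/scottwillsey/scottwillsey-overcast-history/images"
dst_dir = "/Users/scott/Sites/scottwillsey/src/assets/images/oc_artwork"  # assumed location

for name in os.listdir(src_dir):
    if not name.lower().endswith(".png"):
        continue
    img = Image.open(os.path.join(src_dir, name))
    # Resize to 250 pixels wide, keeping the aspect ratio
    new_height = round(img.height * 250 / img.width)
    img.resize((250, new_height), Image.LANCZOS).save(os.path.join(dst_dir, name))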

The Markdown

The result of ALL OF THIS is Markdown that looks like this:

[![Crypto in Kentucky: The next extraction](../../assets/images/oc_artwork/5340766157298005-66bbb48e-a6c9-4817-85f8-3c2284514230.png)](https://overcast.fm/+BL5ZgWiWVU)[Click Here – Crypto in Kentucky: The next extraction](https://overcast.fm/+BL5ZgWiWVU)
![Subscriber Comment Show: Don’t Read Our Emails With Your Kids, Maybe](../../assets/images/oc_artwork/5121671711622378-e7b941aa-1c52-45f6-bf90-641d33eb41cb.png)The 404 Media Podcast (Premium Feed) – Subscriber Comment Show: Don’t Read Our Emails With Your Kids, Maybe
[![Is Elon Musk Really Stepping Back from DOGE?](../../assets/images/oc_artwork/4904748076882358-8c4c7d47-aa70-4967-b316-e7fd54a2424f.png)](https://overcast.fm/+BFs16bV2bY)[Uncanny Valley | WIRED – Is Elon Musk Really Stepping Back from DOGE?](https://overcast.fm/+BFs16bV2bY)
[![421: ‘The Ratchet of Flippancy’, With Craig Mod](../../assets/images/oc_artwork/135464155213596-e66d96b8-bbcd-4557-a110-4540d876f3d0.png)](https://overcast.fm/+B7NDTZ7xw)[The Talk Show With John Gruber – 421: ‘The Ratchet of Flippancy’, With Craig Mod](https://overcast.fm/+B7NDTZ7xw)
[![546: Dave And/Or Buster’s](../../assets/images/oc_artwork/245011785900358-eae0f377-2a82-407f-96d0-49f9494a93e0.png)](https://overcast.fm/+De1kB24UY)[The Rebound – 546: Dave And/Or Buster’s](https://overcast.fm/+De1kB24UY)
![CarPlay Ultra and legal localization (6C Podcast)](../../assets/images/oc_artwork/2092466581013757-93182e76-be62-41ab-897a-fdaf4987f622.png)Six Colors – CarPlay Ultra and legal localization (6C Podcast)
![Airbnb 2025 Summer Release: An Interview with Jud Coplan and Teo Connor](../../assets/images/oc_artwork/5561968007797863-3e9a8de1-8419-43d1-8589-dc732b60d920.png)AppStories+ – Airbnb 2025 Summer Release: An Interview with Jud Coplan and Teo Connor
[![PBS 179 of X — Introducing Liquid Templates in GitHub Pages with Jekyll](../../assets/images/oc_artwork/640362802650572-686da9c0-ce7a-4559-b04e-08ca7c590250.png)](https://overcast.fm/+JGaBVhHcw)[Programming By Stealth – PBS 179 of X — Introducing Liquid Templates in GitHub Pages with Jekyll](https://overcast.fm/+JGaBVhHcw)
[![Between Two Nerds: Why hackers and spies don’t mix](../../assets/images/oc_artwork/4031462282408809-7fc44fa7-0ef1-4f03-942b-f8f052be6ffc.png)](https://overcast.fm/+5Sl_VEs2k)[Risky Bulletin – Between Two Nerds: Why hackers and spies don’t mix](https://overcast.fm/+5Sl_VEs2k)

You may have noticed, if you’re the type of person who just loves staring at and parsing markdown listings, that not all of these actually link to the Overcast episode. I remove the links for paid podcast episodes. I don’t pay for a lot of podcasts, but I do pay for some.

And that’s it! It all happens instantly and automatically when I fire up the script command from Raycast!

OvercastScriptCommand

Footnotes

  1. I apologize unreservedly for that sentence.

Raycast AI as Translator

Raycast

Part of the Raycast series

Here’s a use case for AI that I really like: a Japanese to English translator that gives me a translation and a breakdown of the kanji (Chinese characters) in a Japanese phrase or passage. The reason AI is really good for this, as opposed to Japanese apps that let you paste in phrases or do text detection in images, is that you get the translation, the ability to specify how you want the results returned, the image text detection, AND the ability to ask follow-up questions.

I created both a Raycast command and a Raycast AI chat preset with instructions on translation from Japanese to English.

The Raycast AI Command:

J-E Command

The Raycast AI Chat Preset:

J-E Chat Preset

The command opens in Raycast Quick AI view and uses the contents of the clipboard as the source (including screenshots or photos). The chat preset is for the separate AI chat window and I can paste or attach whatever I like as the source.
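
I won’t reproduce the exact instructions from the command and preset here, but a hypothetical version of them looks something like this (the real ones differ):

You are a Japanese to English translator. For any Japanese text or image I give you:
1. Give a natural English translation.
2. Give a kana transcription with romaji.
3. Give a table breaking down each kanji: its readings, its meaning, the compound word it appears in, and that compound's reading and meaning.
Be ready to answer follow-up questions about grammar, word choice, and nuance.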

Here’s an example use case:

I took a screenshot of an easy Japanese lesson from JapanesePod101.com and gave it to the chat preset to translate.

JPhrase

やめてよ。「野沢」か、「菜味」って呼んでって何度も言っているでしょう。

Here’s the full translation result, including a follow-up question of mine as to why it used one specific word for “many times” instead of another.

Notification: Preset: J to E translator
---
Scott Willsey:
---
Raycast AI: Okay, here is the breakdown and translation:
**English Translation:**
Stop it! I've told you so many times to call me "Nozawa" or "Nami", haven't I?
**Japanese Transcription (Kana):**
やめてよ。「のざわ」か、「なみ」ってよんでってなんどもいっているでしょう。
(Yamete yo. "Nozawa" ka, "Nami" tte yonde tte nando mo itte iru deshou.)
**Kanji Breakdown:**
| Kanji | Reading(s) | Meaning | Compound Word(s) | Compound Reading | Compound Meaning |
| :------- | :--------- | :---------------------------- | :--------------- | :--------------- | :----------------------- |
| 野 | no, ya | field, plain, rustic, wild | 野沢 | Nozawa | Surname: Nozawa |
| 沢 | sawa, zawa | swamp, marsh, stream | 野沢 | Nozawa | Surname: Nozawa |
| 菜 | na, sai | vegetables, greens | 菜味 | Nami | Given Name: Nami* |
| 味 | mi, aji | taste, flavor | 菜味 | Nami | Given Name: Nami* |
| 呼 | yo(bu), ko | call, invite, breathe out | 呼んで | yonde | Call (te-form of 呼ぶ) |
| 何 | nan, nani | what, how many | 何度も | nando mo | many times, repeatedly |
| 度 | do, tabi | degrees, occurrence, time | 何度も | nando mo | many times, repeatedly |
| 言 | i(u), koto | say, word, speech, statement | 言っている | itte iru | are saying, have said |
*\*Note: 菜味 (Nami) is one possible reading for this name. Other readings exist.*
---
Scott Willsey: why 何度 instead of 何回?
---
Raycast AI: That's a great question! While both 何度 (nando) and 何回 (nankai) translate to "how many times" or "how often," there's a subtle difference in nuance and usage.
1. **何回 (nankai): Focus on Countable Occurrences**
* This tends to focus more on the specific, countable *number* of times an action happened or will happen. It's often used when you could potentially answer with a precise number (even if the speaker doesn't know it).
* Think of it like counting discrete events.
* Example: 「この映画、**何回**見ましたか?」 (Kono eiga, **nankai** mimashita ka?) - "How many times have you seen this movie?" (You could answer: 3 times, 5 times, etc.)
2. **何度 (nando): Focus on Repetition/Frequency (Slightly More Abstract/Emphatic)**
* This often emphasizes the *repetition* or *frequency* itself, rather than just the exact count. It can feel slightly more abstract or carry a bit more emotional weight (like surprise, exasperation, or gratitude for repeated actions).
* It's very common in the pattern **何度も (nando mo)**, meaning "many times," "again and again," or "repeatedly."
* Example: 「**何度も**すみません。」 ( **Nando mo** sumimasen.) - "Sorry [for bothering you] so many times." (Focus is on the repeated nature of the bother).
* Example: 「**何度**言ったらわかるの?」 ( **Nando** ittara wakaru no?) - "How many times do I have to tell you?" (Expresses frustration at the repetition needed).
**In the Sentence:**
> やめてよ。「野沢」か、「菜味」って呼んでって**何度も**言っているでしょう。
> (Yamete yo. "Nozawa" ka, "Nami" tte yonde tte **nando mo** itte iru deshou.)
Here, **何度も (nando mo)** is used. This emphasizes the *repeated nature* of the speaker telling the listener how to address them. It conveys a sense of "I've told you this *again and again*." The focus isn't necessarily on *exactly* how many times (was it 5 times? 10 times?), but on the fact that it has happened *repeatedly*, implying frustration.
While you *could* technically say 「何回も言っているでしょう」 (nankai mo itte iru deshou), **何度も** feels slightly more natural and common in this specific context of expressing exasperation about repeated instructions or warnings. It highlights the *frequency* and the speaker's feeling about it.

Not only does this give me a full breakdown of the kanji, including their compound and individual meanings, but it lets me ask questions about specific parts of the sentence.

Bear in mind I lived in Japan for 8 years and I’m going to know when an LLM gives me something bogus or tries to lead me astray. But this is good for reminding me how specific conjugations work or the differences in terms that I might not understand.

I was a kid when I lived in Japan, and I never formally studied Japanese. My learning was based on intuitive understanding as I played with other kids, watched TV, and spent time listening to people around me talking. Inevitably I didn’t learn some nuances or made incorrect assumptions that then became part of my understanding of Japanese. That’s why use cases like this, with a custom kanji breakdown instruction and the ability to ask questions about the translation or the source phrase, are super handy for me.

You can learn more about Raycast AI, Raycast AI commands, and Raycast AI chat presets here.

Slowly Posting Reviews

I’ve written about my review automation workflow in some detail, and while I personally find it fascinating, the real point is the reviews themselves. I’m slowly getting some posted and will keep adding more steadily over time.

Here’s what my current Music Reviews page looks like. The goal is to start adding them in a way that creates a diverse mix of music while I build up the number of reviews available.

MusicReviewsWIP

Let me point you again to Matt Birchler’s Quick Reviews. I use the iOS app version to generate the reviews, and then my Mac takes over from there! It’s a great workflow, and I like the look of the results.

IndieWeb is where the fun is. If you like the random stuff I throw on my site, think about making your own, and about what you want it to be, so you can make life just a little bit happier for a minute or two for anyone who stumbles across you. We could all use some good stuff in life right now.

Media Reviews Are Now in Search Results

Astro

Part of the Astro series

When I added a Reviews page, I wanted individual reviews to show up in my site search. Due to some complexities (which I won’t go into) of how Pagefind indexes things and how I optimize my images using the Astro Image component, this was easier said than done – until I remembered the good old HTML hidden global attribute. Then I just made a hidden span and populated it with the image alt text.

<div class="review">
<p>
{
page.data.map((book) => (
<span hidden>{`${book.alttext}`}</span>
<a href={`/images/posts/${book.review}.jpg`}>
<Image
src={import(`../../../assets/images/posts/${book.review}.png`)}
alt={`${book.alttext}`}
/>
</a>
))
}
</p>
</div>

Now all my media reviews show up in site search results!

SearchReviewsResults

Enjoy the reviews, and don’t forget there are separate categories for Book Reviews, Movie Reviews, TV Show Reviews, and Music Reviews. I will be actively adding more reviews from this point forward.

Updated Sessions Raycast Script Command, 2025 Edition

Raycast

Part of the Raycast series

I’ve spoken a couple times about my Sessions Raycast script command, both when I originally set it up, and then again when I updated it to use Moom as my window manager. Well, I’m back to using Raycast as my window manager, and I also use a Raycast focus mode for my podcast session. So here’s your updated Session Raycast script command, 2025 edition.

You may recall that the purpose of this script command is just to set up my environment in a way that’s suitable for a specific task. It always closes all apps, then opens some core apps, and then opens other apps specific to the session type.1 It also sets audio output and input based on the session type, arranges app windows in a specific way, and either sets Coffee for Raycast to Caffeinate mode (keep the screen on) or Decaffeinate mode (let it sleep when it wants).

Here’s the current Sessions script command.

#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Session
# @raycast.mode fullOutput
# Optional parameters:
# @raycast.icon ../icons/app-mode.png
# @raycast.argument1 { "type": "dropdown", "placeholder": "Session", "data": [ { "title": "Home", "value": "home" }, { "title": "IT", "value": "it" }, { "title": "Podcast", "value": "podcast" }, { "title": "Podcast Edit", "value": "podcastedit" } ] }
# @raycast.packageName Utils
# Documentation:
# @raycast.description Set up a workflow session
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey
open 'raycast://focus/complete'
open raycast://extensions/raycast/system/quit-all-applications
sleep 3
CORE=(1Password Messages Mail Safari AnyBox)
open 'raycast://extensions/mooxl/coffee/decaffeinate'
TYPE=$1
for value in "${CORE[@]}"
do
    open -a "$value"
done

if [ "$TYPE" = 'home' ]; then
    open 'raycast://script-commands/set-default-browser-safari'
    open -a Warp
    /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Speakers"
    /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Microphone" -t "input"
    open 'raycast://customWindowManagementCommand?&name=Home'
    exit
fi

if [ "$TYPE" = 'it' ]; then
    open 'raycast://script-commands/set-default-browser-chrome'
    open -a "Google Chrome"
    open -a Warp
    open -a Slack
    /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Speakers"
    /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Microphone" -t "input"
    open 'raycast://customWindowManagementCommand?&name=IT'
    exit
fi

if [ "$TYPE" = 'podcast' ]; then
    open 'raycast://script-commands/set-default-browser-safari'
    open -a "Audio Hijack"
    open -a Farrago
    open -a Bear
    open -a Notes
    open -a "Microsoft Teams"
    /opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR"
    /opt/homebrew/bin/SwitchAudioSource -s "Shure Beta 87a & Farrago" -t "input"
    open 'raycast://customWindowManagementCommand?&name=Podcasting'
    open 'raycast://extensions/mooxl/coffee/caffeinate'
    open 'raycast://focus/start?goal=Podcast&categories=podcast&mode=allow'
    exit
fi

if [ "$TYPE" = 'podcastedit' ]; then
    open 'raycast://script-commands/set-default-browser-safari'
    open -a "Logic Pro"
    open -a Bear
    open -a Finder ~/Documents/Podcasts/FwB
    /opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR"
    open 'raycast://customWindowManagementCommand?&name=Edit%20Podcast'
    open 'raycast://focus/start?goal=Podcast%20Edit&categories=podcastedit&mode=allow'
    exit
fi

When I run the Sessions script command in Raycast, I just choose the session type. The nice thing about Raycast dropdowns is they pre-select the option you chose last time. In the case of the Session script command, that’s not as convenient as it is for some other script commands, but that’s life.

SessionsDropdown

Anyway, if you have any questions about this script command or Raycast script commands in general, you know how to find me (hint: Mastodon or Bluesky).

Footnotes

  1. I should see if there’s a way to close all apps that aren’t in a list, instead of closing all apps and then re-opening all apps that are in a list.
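
As a starting point for that footnote, here’s a rough sketch of one way to quit every visible app that isn’t in a keep list, using osascript. I haven’t actually adopted this, and it assumes the app names reported by System Events match the names that "tell application ... to quit" expects.

#!/bin/bash
# Hypothetical sketch: quit every visible app that isn't in the KEEP list
KEEP=(1Password Messages Mail Safari AnyBox Raycast Finder)
osascript -e 'tell application "System Events" to get name of (processes where background only is false)' |
  tr ',' '\n' | sed 's/^ *//' |
  while read -r app; do
    keep_it=false
    for k in "${KEEP[@]}"; do
      [ "$app" = "$k" ] && keep_it=true && break
    done
    if [ "$keep_it" = false ]; then
      osascript -e "tell application \"$app\" to quit"
    fi
  done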

Creating Drafts in Astro 5

Astro

Part of the Astro series

A week or so ago, I started writing a blog post, as I sometimes do, this one pertaining to my Automation Workflow for Media Reviews. Unfortunately, I wanted to preview it as I went along, so I copied the in-progress draft to the main git branch of my local copy of the website.

You can see where this is going.

Yes, I updated something else on the website and published it, INCLUDING the partial draft of the blog post I was working on. This wasn’t a super huge deal, except that I use EchoFeed to automatically post to Bluesky and Mastodon whenever I post something new on the site.

Sigh.

The good news is that this finally pushed me to add drafts functionality to my site, so that I could have drafts render when running locally in development mode, but not actually get written when doing a site build. It’s a good, basic feature to have.

Initially I started with the method shared by Alex Curtis in his post How to Create a Draft Post in Astro, but his filter didn’t actually work for me, I think because his example was written for an earlier version of Astro. I wound up using the Astro Docs example for Filtering Content Collection Queries.

Basically, there are three steps to adding draft posts to Astro 5:

  1. Add an optional draft data property to your blog post collection in your content.config.ts, as below,
src/content.config.ts
const postCollection = defineCollection({
  loader: glob({ pattern: "**/[^_]*.{md,mdx}", base: "./src/content/posts" }),
  schema: ({ image }) =>
    z.object({
      title: z.string(),
      description: z.string(),
      link: z.string().optional(),
      date: z
        .string()
        .transform((str) => new Date(str))
        .optional()
        .nullable(),
      keywords: z.string().array(),
      cover: image().optional(),
      coverAlt: z.string().optional(),
      series: z.string().optional(),
      draft: z.boolean().optional(),
    }),
});
  2. Filter based on this in any page that uses this content collection, the way the Astro Docs show,
src/pages/[page].astro
---
import Base from "../layouts/Base.astro";
import Post from "../components/Post.astro";
import Pager from "../components/Pager.astro";
import { getCollection } from "astro:content";
import site from "../data/site.json";

export async function getStaticPaths({ paginate }) {
  let posts = await getCollection("posts", ({ data }) => {
    return import.meta.env.PROD ? data.draft !== true : true;
  });
  posts = posts.sort(
    (a, b) => new Date(b.data.date).valueOf() - new Date(a.data.date).valueOf(),
  );
  return paginate(posts, {
    pageSize: site.posts.paginationSize,
  });
}

const { page } = Astro.props;
const title = site.title;
const description = `Posts Page ${page.currentPage}`;
---

<Base title={title} description={description}>
  <section aria-label="Post list" data-pagefind-ignore>
    {
      page.data.map((post, index) => {
        return <Post post={post} />;
      })
    }
    <Pager page={page} />
  </section>
</Base>
  3. And finally, use it in a draft post!
---
title: Creating Drafts in Astro
description: describe
date: "2025-03-25T00:10:00-08:00"
keywords: ["keyword"]
draft: true
slug: "creating-drafts-in-astro"
---
I’ve always wanted to be a writer, and I’ve always wanted to create drafts in Astro that won’t get published until I want them to.
[Now I can!](https://jacurtis.com/notes/astro-draft-posts/)

One more thing though – none of this keeps the post page itself from being rendered during a build. It just keeps anything from linking to it or showing it in a list of posts. This means that it will show up in your RSS feed unless you edit your RSS template to also filter it out.

src/pages/rss.xml.js
import rss from "@astrojs/rss";
import sanitizeHtml from "sanitize-html";
import { rfc2822 } from "../components/utilities/DateFormat";
import { globalImageUrls } from "../components/utilities/StringFormat";
import site from "../data/site.json";

export function GET(context) {
  const postImportResult = import.meta.glob("../content/posts/**/*.md", {
    eager: true,
  });
  const posts = Object.values(postImportResult)
    .filter((post) => post.frontmatter.draft !== true)
    .sort(
      (a, b) =>
        new Date(b.frontmatter.date).valueOf() -
        new Date(a.frontmatter.date).valueOf(),
    );
  return rss({
    title: site.title,
    description: site.description,
    site: context.site,
    xmlns: {
      atom: "http://www.w3.org/2005/Atom/",
      dc: "http://purl.org/dc/elements/1.1/",
      content: "http://purl.org/rss/1.0/modules/content/",
    },
    items: posts.map((post) => ({
      title: post.frontmatter.title,
      link: `${site.url}${post.frontmatter.slug}`,
      pubDate: rfc2822(post.frontmatter.date),
      description: post.frontmatter.description,
      customData: `<summary>${post.frontmatter.description}</summary>`,
      content: globalImageUrls(
        site.url,
        sanitizeHtml(post.compiledContent(), {
          allowedTags: sanitizeHtml.defaults.allowedTags.concat(["img"]),
        }),
      ),
    })),
  });
}

That’s it! Hit me up on Bluesky or Mastodon if you have any questions.

Automation Workflow for Media Reviews

Mac

Part of the Mac series


Updating the Look of My /now Page

The now page is a cool concept by Derek Sivers for talking about what you’re doing now in your life. I use my /now page slightly differently, to just list things I’ve been reading, drinking, eating, watching, listening to, and sometimes playing recently.

Initially everything on my /now page was bullet lists of text links. This works, and it’s fine, but I was also a little envious of how nice some other people’s media logs looked with artwork for their movies and shows. I’m thinking of Robb Knight’s Almanac, Habib Cham’s Books page, and Cory Dransfeldt’s Music, Watching, and Currently Reading pages, for example.

As I was rolling around in the back of my mind how to improve my /now page, Matt Birchler released a neat little iOS app called Quick Reviews. Quick Reviews is based on a web utility he’s had for a while now that lets you create little images with media reviews, complete with artwork, and share them with people.

As I played with the iOS app version of Quick Reviews and saw how it exported the images, I realized I could build a workflow around incorporating them into my /now page.1 The most important requirement for any change to the page was that it not take much more time to update on a regular basis. The text link version was easy – I have scripts that pull my Trakt history for the Movies and TV Shows portions of the page and return formatted markdown bullet list items. All I have to do is paste them where I want them.

Thanks to the power of scripting and the wonderful array of automation utilities for macOS, I’ve been able to find a way to make updating my /now page with Quick Reviews images equally quick and painless. So get ready, because the rest of this article exists to show you how it’s done.

By the way, I made a handy little video of the automation workflow in action which you can watch here or on YouTube, or jump to the detailed workflow description below to keep reading.

Automating Quick Reviews

Step 1: Quick Reviews

The first link in my automation chain is Quick Reviews by Matt Birchler. This is an iOS app that lets you choose a piece of media to review and allows you to add artwork, your rating, and your text review to a nicely formatted image.

First you choose a Media Type (Movie, TV Show, Game, Book, Music, or Other), give it the title of the media, add artwork (or use Magic Mode to add it for you if it’s a Movie, TV Show, or Game), and then add your text review and choose a rating. You can customize the text that gets used for ratings to your own liking (use emoji stars if you want!) at an app level, by the way.

Next, edit the image colors for background and rating highlight, edit the font and font styles, and save or export the image. In my case, I use export and share and then AirDrop it to my Mac.

WritingAReview

Step 2: Hazel

When I AirDrop the Quick Reviews image to my Mac, several things happen automatically, thanks in large part to a wonderful Mac utility called Hazel, by Noodlesoft. Hazel does one thing: it watches folders that you tell it to. But then it allows you to perform a ton of different actions on those folders, or files in those folders, based on triggers you specify.

HazelPicturesWorkflow

In my case, I have Hazel watch my Downloads folder for incoming images that have the png extension and that have file names containing “PNG image”. These criteria are based on the exported Quick Reviews file name and type. If any files meeting those criteria hit the Downloads folder, they’re moved to /Users/scott/Scripts/scottwillsey/scottwillsey-images/In, renamed in a specific way, and then a Bash script called scottwillsey-media-review-images.sh is run.

Here’s the entirety of that script:

scottwillsey-media-review-images.sh
#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title ScottWillsey Media Review Images
# @raycast.mode silent
# Optional parameters:
# @raycast.icon ../icons/scottwillsey.png
# @raycast.packageName Website
# Documentation:
# @raycast.description Resizes and copies images for the ScottWillsey site's /now page media reviews.
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey
/Applications/Retrobatch.app/Contents/MacOS/Retrobatch --workflow /Users/scott/Scripts/scottwillsey/scottwillsey-images/BlogMovieReviewImageResizeWithText.retrobatch &
wait
/Users/scott/Scripts/scottwillsey/scottwillsey-images/scottwillsey_review_images.py

You can see that it’s written as a Raycast script command, which is why all the weird comments are in there, but I don’t run it from Raycast because I let Hazel call it directly when I AirDrop the image.

First, it runs a Mac app called Retrobatch with a specific Retrobatch workflow, waits for that to finish, and then runs a Python script.

Which takes us to Step 3…

Step 3: Retrobatch

Retrobatch is a stupendously amazing image automation tool from Flying Meat Software. Its whole job is image batch processing.

The particular Retrobatch workflow that I run in this case takes images from /Users/scott/Scripts/scottwillsey/scottwillsey-images/In, where Hazel moved them, and creates two copies – one resized to 1000px wide and saved as a png, and the other full-sized but saved as a jpg. Both of these are saved to /Users/scott/Scripts/scottwillsey/scottwillsey-images/Out.

The purpose of having two copies is so that I can display an optimized version in my articles and have those link to the full-sized image if the reader clicks on them – more on this later.

RetrobatchWorkflow

One other important thing this Retrobatch workflow does is to extract the text from the resized image and save that text in a file in the Out folder with the same file name as the corresponding image, except with a .txt extension.

I need the text from the image to create my image link alt text for the website, and the fact that Retrobatch can extract this for me makes the fully automated nature of this process possible. I don’t have to make a copy of the text while typing my review on the phone and send it to the Mac somehow. It’s literally delivered in the image and extracted on the Mac by Retrobatch. It’s beautiful.

Now Retrobatch is done. For each review image in /Users/scott/Scripts/scottwillsey/scottwillsey-images/In, there is now a corresponding resized png, full-sized jpg, and text file with the text from the image saved in /Users/scott/Scripts/scottwillsey/scottwillsey-images/Out.

When Retrobatch finishes, the scottwillsey-media-review-images.sh script that started the Retrobatch workflow moves on to the next step – running a Python script called scottwillsey_review_images.py.

Step 4: Python Script

The Python script is much more involved than the Bash script was. I’m just going to show you the whole thing and then walk through it.

scottwillsey_review_images.py
#!/Users/scott/Scripts/python/venv/bin/python
import os
import shutil
import pyperclip
from pathlib import Path
import uuid

# Set paths
output_path = "/Users/scott/Scripts/scottwillsey/scottwillsey-images/Out"
input_path = "/Users/scott/Scripts/scottwillsey/scottwillsey-images/In"
review_images_path = "/Users/scott/Documents/Sites/ScottWillsey/Reviews"
public_image_path = "/Users/scott/Sites/scottwillsey/public/images/posts"
assets_image_path = "/Users/scott/Sites/scottwillsey/src/assets/images/posts"

# Initialize empty string for file links
file_links = ""

# Generate new UUID
new_uuid = str(uuid.uuid4())

# Loop through files in output directory
for file in os.listdir(output_path):
    alt_text = "Review Image"
    if file.endswith(".png"):
        # Get base name without extension
        base_name = os.path.splitext(file)[0]
        # Create UUID name
        uuid_name = base_name + "-" + new_uuid
        # Look for corresponding text file
        txt_file = os.path.join(output_path, f"{base_name}.png.txt")
        if os.path.exists(txt_file):
            with open(txt_file, "r") as f:
                alt_text = f.read().strip().replace("\n", " ")
        # Create markdown link
        file_links += f"[![{alt_text}](../../assets/images/posts/{uuid_name}.png)](/images/posts/{uuid_name}.jpg)\n"

# Copy to clipboard
pyperclip.copy(file_links)

# Rename files with the UUID and move them from the input folder to the review images archive folder
for file in os.listdir(input_path):
    # Get base name and extension
    base_name, ext = os.path.splitext(file)
    # Create new filename with UUID
    new_filename = f"{base_name}-{new_uuid}{ext}"
    src_file = os.path.join(input_path, file)
    # Move file to review images directory with the new name
    dst_file = os.path.join(review_images_path, new_filename)
    shutil.move(src_file, dst_file)

# Delete all .txt files in output directory
for file in os.listdir(output_path):
    if file.endswith(".txt"):
        os.remove(os.path.join(output_path, file))

# Move files from output to site src assets images directory
for file in os.listdir(output_path):
    # Get base name and extension
    base_name, ext = os.path.splitext(file)
    # Create new filename with UUID
    new_filename = f"{base_name}-{new_uuid}{ext}"
    src_file = os.path.join(output_path, file)
    if ext.lower() == '.png':
        dst_file = os.path.join(assets_image_path, new_filename)
        shutil.move(src_file, dst_file)
    elif ext.lower() == '.jpg':
        dst_file = os.path.join(public_image_path, new_filename)
        shutil.move(src_file, dst_file)

The gist of the script is simple, although it does several things:

# Set paths
output_path = "/Users/scott/Scripts/scottwillsey/scottwillsey-images/Out"
input_path = "/Users/scott/Scripts/scottwillsey/scottwillsey-images/In"
review_images_path = "/Users/scott/Documents/Sites/ScottWillsey/Reviews"
public_image_path = "/Users/scott/Sites/scottwillsey/public/images/posts"
assets_image_path = "/Users/scott/Sites/scottwillsey/src/assets/images/posts"
# Initialize empty string for file links
file_links = ""
# Generate new UUID
new_uuid = str(uuid.uuid4())
# Loop through files in output directory
for file in os.listdir(output_path):
    alt_text = "Review Image"
    if file.endswith(".png"):
        # Get base name without extension
        base_name = os.path.splitext(file)[0]
        # Create UUID name
        uuid_name = base_name + "-" + new_uuid
        # Look for corresponding text file
        txt_file = os.path.join(output_path, f"{base_name}.png.txt")
        if os.path.exists(txt_file):
            with open(txt_file, "r") as f:
                alt_text = f.read().strip().replace("\n", " ")
        # Create markdown link
        file_links += f"[![{alt_text}](../../assets/images/posts/{uuid_name}.png)](/images/posts/{uuid_name}.jpg)\n"
# Copy to clipboard
pyperclip.copy(file_links)

This creates a UUID to add into the image file names. Then it looks at every file in the Retrobatch output directory (/Users/scott/Scripts/scottwillsey/scottwillsey-images/Out), checks to see if it’s a png, and gets its base file name if so.

After getting the png file base name, the script looks for a corresponding txt file, which will contain the extracted review text. If it finds one, it reads the contents of the file to use as image alt text.

Next, it creates a markdown link for the png image complete with alt text, and a link to the larger jpg image. Then it appends this markdown link to the file_links variable.

Finally, it copies the complete file_links variable to the clipboard. This results in markdown for the images ready to paste into the /now page. The markdown basically looks like this, albeit with a few extra bits of text from the extracted text that I clean up:

[![Severance S1&2 • 2022 • Man, this thing is weird and wild but also so much deeper than you might think at first. One of the best shows ever. • Loved It!](../../assets/images/posts/PngImage42999A84080-review-6126286a-a138-4551-8892-ad278b0bfa5f.png)](/images/posts/PngImage42999A84080-review-6126286a-a138-4551-8892-ad278b0bfa5f.jpg)

Python Script Part 2: Archive All Images in the Retrobatch Input Folder

# Rename files with the UUID and move them from the input folder to the review images archive folder
for file in os.listdir(input_path):
    # Get base name and extension
    base_name, ext = os.path.splitext(file)
    # Create new filename with UUID
    new_filename = f"{base_name}-{new_uuid}{ext}"
    src_file = os.path.join(input_path, file)
    # Move file to review images directory with the new name
    dst_file = os.path.join(review_images_path, new_filename)
    shutil.move(src_file, dst_file)

This part is simple. It just renames the images with the UUID included in the file name and moves them from the Retrobatch input folder to a review images archive folder. This is just a folder for me to keep copies of my reviews in, it has nothing to do with generating the website.

Python Script Part 3: Delete All Text Files in the Retrobatch Output Folder

# Delete all .txt files in output directory
for file in os.listdir(output_path):
    if file.endswith(".txt"):
        os.remove(os.path.join(output_path, file))

Again, very simple. This just deletes all the text files in the Retrobatch output folder. I don’t need them anymore as I have the text in the markdown links.

Python Script Part 4: Rename and Move the Image Files to Use on the Website

# Move files from output to site src assets images directory
for file in os.listdir(output_path):
    # Get base name and extension
    base_name, ext = os.path.splitext(file)
    # Create new filename with UUID
    new_filename = f"{base_name}-{new_uuid}{ext}"
    src_file = os.path.join(output_path, file)
    if ext.lower() == '.png':
        dst_file = os.path.join(assets_image_path, new_filename)
        shutil.move(src_file, dst_file)
    elif ext.lower() == '.jpg':
        dst_file = os.path.join(public_image_path, new_filename)
        shutil.move(src_file, dst_file)

This loops through all files in the Retrobatch output folder (since only images remain here) and does the following with each:

  • Gets the base file name and file extension,
  • Creates a new filename with the UUID added in between the base file name and extension,
  • Moves the image to /Users/scott/Sites/scottwillsey/src/assets/images/posts with the new UUID-included file name if it’s a png file,
  • Or moves the image to /Users/scott/Sites/scottwillsey/public/images/posts with the new UUID-included file name if it’s a jpg.

That’s the entirety of the script. Once it’s done, there are no images left in the Retrobatch input and output folders, there are jpg and png images in the correct website source folders for use on the compiled /now page, and there are corresponding Markdown links in the clipboard ready to paste into the /now page content source Markdown file.

Step 5: Profit

The results are something like the image below, or actually what you see if you click the link for the /now page on the site menu. It’s the one that looks like a clock.

ReviewsAutomationResult

There are definitely improvements that could be made, but this is a great start, and it’s proof that sometimes things that look like a very manual process can be automated, thanks largely to the great ecosystem of Mac indie programmers who make outstanding irreplaceable utilities like Hazel and Retrobatch.

If you want to see a comparison of the old style, imageless /now page and the current beautiful one, just look at the image below. On the right is the new, Quick Reviews-ified version, on the left is the old one. Or, to put it another way, on the left is the before now, and on the right is the now now.

NowOldAndNew

Summarium

That’s a lot of words, so let’s summarize the process here:

  1. Scott Willsey creates a review in Quick Reviews on iOS,
  2. Scott Willsey AirDrops it to his Mac,
  3. Hazel renames and moves the image and runs a Bash script,
  4. Bash script runs a Retrobatch workflow and waits for it to finish,
  5. Retrobatch makes two copies of the image, a full-sized jpg and a smaller png, as well as a text file containing the extracted text from the image,
  6. Bash script runs a Python script,
  7. Python script constructs the Markdown links and moves the Retrobatch images into the correct locations for the site to compile and display on the /now page,
  8. Scott Willsey pastes the Markdown links into the /now page source Markdown file,
  9. Scott Willsey says “Damn, that is cool!” and goes to make another cup of coffee.

Footnotes

  1. If you’re asking yourself right now “Is this guy going to make EVERY reference to his /now page a hyperlink to it?”, the answer is yes. Yes, I am.