Updated Sessions Raycast Script Command

Raycast

Part of the Raycast series

As you may know, I created a Raycast script command to trigger what I call “sessions”, which really just set up the Mac for different tasks, such as podcasting or “normal” general use. At the time, I was using Raycast for window management, so my script command referenced Raycast window management layouts. Now I’m using Moom for window management, so I needed to update the script to call my Moom layouts instead.

In the process, I decided to steal even more from Robb Knight’s App Mode Raycast script command and use his method of killing all apps before activating a session and having an array of default apps that I always want open. It’s almost exactly the same script now.

The updated version looks like this:

#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Session
# @raycast.mode fullOutput
# Optional parameters:
# @raycast.icon /Users/scott/Scripts/Raycast/icons/app-mode.png
# @raycast.argument1 { "type": "dropdown", "placeholder": "Session", "data": [ { "title": "Home", "value": "home" }, { "title": "IT", "value": "it" }, { "title": "Podcast", "value": "podcast" }, { "title": "Podcast Edit", "value": "podcastedit" } ] }
# @raycast.packageName Utils
# Documentation:
# @raycast.description Set up a workflow session
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey
open raycast://extensions/raycast/system/quit-all-applications
sleep 3
CORE=(1Password Messages Mail Safari)
TYPE=$1
for value in "${CORE[@]}"
do
  open -a "$value"
done

if [ "$TYPE" = 'home' ]; then
  open 'raycast://script-commands/set-default-browser-safari'
  sleep 2
  open -a Warp
  /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Speakers"
  /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Microphone" -t "input"
  osascript ~/Scripts/applescript/apply_moom_layout.scpt "Home"
  exit
fi

if [ "$TYPE" = 'it' ]; then
  open 'raycast://script-commands/set-default-browser-chrome'
  sleep 2
  open -a "Google Chrome"
  open -a Warp
  open -a Slack
  /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Speakers"
  /opt/homebrew/bin/SwitchAudioSource -s "Studio Display Microphone" -t "input"
  osascript ~/Scripts/applescript/apply_moom_layout.scpt "IT"
  exit
fi

if [ "$TYPE" = 'podcast' ]; then
  open 'raycast://script-commands/set-default-browser-safari'
  sleep 2
  open -a "Audio Hijack"
  open -a Farrago
  open -a Bear
  open -a FaceTime
  /opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR"
  /opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR" -t "input"
  osascript ~/Scripts/applescript/apply_moom_layout.scpt "Podcast"
  exit
fi

if [ "$TYPE" = 'podcastedit' ]; then
  open 'raycast://script-commands/set-default-browser-safari'
  sleep 2
  open -a "Logic Pro X"
  open -a Finder ~/Documents/Podcasts/FwB
  /opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR"
  osascript ~/Scripts/applescript/apply_moom_layout.scpt "PodcastEdit"
  exit
fi

You can see that because my so-called “IT” session sets Chrome as the default browser, I’m compelled to set Safari back as my default browser for every other session type. I suppose I could just move that out to the top, have it called no matter what, and then immediately reverse it if the session type is “IT”. You can read about my default browser script commands here.
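
Here’s a minimal sketch of what that restructure might look like, using the same script-command URLs as above: set Safari up front, then flip to Chrome only when the session type is “it”.

# Default every session to Safari, then switch to Chrome only for IT
open 'raycast://script-commands/set-default-browser-safari'
if [ "$TYPE" = 'it' ]; then
  open 'raycast://script-commands/set-default-browser-chrome'
fi
sleep 2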

The reason for the sleep commands is that without them, running the Session script command would hit a timing issue in which only some of the apps would open. Putting a sleep after killing all applications and another after setting the default browser solved the problem.

Calling Moom layouts using AppleScript commands (osascript) is possible because the author of Moom supports AppleScript calls to the program. I wish more programmers would think of their apps as potential links in a workflow chain like this.

Moom 4

Mac

Part of the Mac series

You may have noticed, as you’ve wandered the site, that I’m a bit of a Raycast fan. You may, therefore, imagine that I use the Raycast window management tools to move and arrange windows. And it’s true. Well, it was true. Then Many Tricks released Moom 4, the update to their popular Moom window manager app.

Historically, I’m a window manager serial monogamist. Like any other type of utility, there’s a wide range of approaches to this task in the available third-party apps. Some let you draw your window size and location on the screen, some have keyboard shortcuts for instantly popping things into specific locations, some let you save multi-window layouts, and many offer various combinations of these things. Raycast does two of these things. Moom does all of these things.

At first glance, you might think Moom is a slower way to manage windows. One of its signature controls is the Keyboard Controller, which you activate via a hotkey (in my case ^⌥⌘Space). Once it’s onscreen, you can use the keys indicated on its cheat sheet to move and/or resize your focused window per the options available.

Moom Keyboard Controller

You can also customize the options that appear at the top of that list, and the bottom section will always show any actions you’ve assigned single-key hotkeys for. For example, based on my arrangement above, pop open the Keyboard Controller with ^⌥⌘Space and then hit 3, and you’ll move the foreground window to the left third of the screen.

Another great way that Moom allows for window control is something I think may be unique to it – a palette of window management buttons that shows up when you hover over one of the stoplight buttons in the window’s top left corner. You can specify in Moom’s settings which of the three buttons to hover over to get the palette, as well as which window commands are available.

Moom Palette Menu

These are all great and dandy, but Moom has freeform sizing and moving options too. You can configure Moom hotkeys that allow you to glide the cursor over a window to move it or to resize it. These options sure beat trying to find an empty spot to click on a title bar filled with tabs, or to find the little grab handles at the edge of the window. Apple really hates onscreen controls and the native resizing controls can be slow to manipulate.

And then there’s the grand champion of all freeform resizing, the Moom grid. It allows you to quickly draw out a window size to fit anywhere on a grid. You can set the number of rows and columns in the grid, which gives you quite a bit of range in window size granularity.

Here’s what it looks like to drag out the shape you want to resize your window to on the grid:

Moom Grid Resize

And here’s the resulting resized Safari window.

Moom Post Resize

Despite all these great window management options, sometimes good old-fashioned, hotkey-controlled instant resizing and moving is still the best way to quickly tile windows. Moom allows for that. You can create custom window sizes and locations and assign hotkeys to them. You can also make window layouts with multiple windows that can be applied instantly. Window layouts can be set to apply to specific apps only, or to whatever windows are handy when you apply the layout.

You can see some of my window settings complete with hotkeys in my Moom settings window below.

Moom Size Position Settings

And in a bit of good news for me, Moom commands can be activated via AppleScript. Thanks to this, I modified my Sessions Raycast script command to set window layouts using Moom instead of Raycast’s window management tools. I’ll show you that in a future post.

I do like the window management commands in Raycast, and there’s always an argument for having one less application running in memory. But Moom is more versatile (and frankly more fun to use), and Raycast lets me disable its window management, and so Moom 4 seems like a no-brainer to me.

At least for now.

The Green Economy Is Hungry for Copper—and People Are Stealing, Fighting, and Dying to Feed It

This is not a normal topic for me on this site, but since I blather on about technology nonstop and make my living thanks to it, it’s important to highlight the very real downsides it brings. For example, the current push towards electrification of everything is ramping up the planet’s need for copper, and copper means exploitation, death, and environmental disaster for people in many parts of the world.

Even if that doesn’t sound very interesting, the human element of the story is intriguing. Just the part about Robert “Toxic Bob” Friedland illustrates how wild this story really is:

By the early 1980s Friedland had teamed up with some Vancouver-based financiers and moved into the world of mining, hustling for small gold outfits. He made headlines in 1992 when a Colorado gold mine he had previously overseen (as its parent company’s CEO) leaked toxic heavy metals into a nearby watershed, earning him the nickname “Toxic Bob.” In the meantime he had also discovered a major gold deposit in Alaska and an even bigger nickel deposit in Canada, which he later sold for more than $3 billion. Friedland has been a major player in the industry ever since. (He also has a sideline in movies, helping to produce Crazy Rich Asians and other films. Another fun Friedland fact: This summer, he bought a scenic California estate from Ellen DeGeneres for a trifling $96 million.)

Interestingly, at one point this guy ran the commune Steve Jobs lived on in Oregon in the ’70s. Steve eventually left, disillusioned with what he saw as Toxic Bob’s materialism. Not to put too fine a point on it, but Toxic Bob was far from the only hippie idealist who transformed into an uber-capitalist, convincing himself in the process that it was for the good of humanity and not just his own ballooning bank account.

Linked post: The Green Economy Is Hungry for Copper—and People Are Stealing, Fighting, and Dying to Feed It | WIRED

Automating Sessions With Raycast Script Commands

Raycast

Part of the Raycast series

In the past, I used a menubar utility called Bunch to start and stop my podcast session setup. But this was before I started using Raycast, and now that I already use Raycast to run lots of scripts and automations, I decided to do this with Raycast too.

I took inspiration from Robb Knight’s App Mode Raycast script command and created one called Sessions. Like Robb’s, it uses a dropdown to choose what “session” I want to run. It’s a bit of a weird name, I guess, because I have one called “Stop Podcasting”, which doesn’t really seem like a session, but more like a lack of a session.

Raycast Sessions Script Command

When I run the Sessions script command, I currently have two choices: Podcasting or Stop Podcasting.

The Podcasting option runs a Raycast Window Management Command which opens specific apps (Audio Hijack, Farrago, Safari, Bear and FaceTime) and puts their windows in specific locations on the screen using a preset Window Layout.

This is what the Raycast Window Layout Command looks like. The apps are Audio Hijack (top left), Farrago (bottom left), Bear (center), FaceTime (top right), and Safari (right half).

Podcast Session Window Layout Command

The script command also sets the audio output to my Elgato Wave XLR, which has my podcasting headphones plugged into it, and sets the audio input to a Loopback audio device that combines my podcasting mic and Farrago soundboard into one input device. Finally, it starts an Amphetamine session, which keeps the display from sleeping if I don’t touch the mouse or keyboard for a while while podcasting, and toggles my desk lamps on using a Shortcuts shortcut.

Here’s what it looks like on my Apple Studio Display after running the Sessions script command:

Podcast Session Window Layout in Action

The Stop Podcasting option sets the audio output and input to my Studio Display’s speakers and mic, closes Audio Hijack, Farrago, Bear, and FaceTime, centers Safari on the screen again, and stops the Amphetamine session. It also toggles the desk lamps.

Here’s the full script command:

#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Sessions
# @raycast.mode silent
# Optional parameters:
# @raycast.icon ../icons/app-mode.png
# @raycast.argument1 { "type": "dropdown", "placeholder": "Choose Mode", "data": [ { "title": "Podcasting", "value": "podcasting" }, {"title": "Stop Podcasting", "value": "stopp"} ] }
# @raycast.packageName Utils
# Documentation:
# @raycast.description Set apps and devices for specific work session types
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey
TYPE=$1
if [ "$TYPE" = 'podcasting' ]; then
/opt/homebrew/bin/SwitchAudioSource -s "Elgato Wave XLR"
/opt/homebrew/bin/SwitchAudioSource -s "Shure Beta 87a & Farrago" -t "input"
open raycast://customWindowManagementCommand?name=Podcasting
shortcuts run "Scott Desk Lamps Toggle"
osascript -e 'tell application "Amphetamine" to start new session with options {duration:3, interval:hours, displaySleepAllowed:false}'
exit
fi
if [ "$TYPE" = 'stopp' ]; then
/opt/homebrew/bin/SwitchAudioSource -s "Studio Display Speakers"
/opt/homebrew/bin/SwitchAudioSource -s "Studio Display Microphone" -t "input"
osascript -e 'quit app "Farrago"'
osascript -e 'quit app "Bear"'
osascript -e 'quit app "Audio Hijack"'
osascript -e 'quit app "FaceTime"'
shortcuts run "Scott Desk Lamps Toggle"
open raycast://customWindowManagementCommand?name=Safari%20Center
osascript -e 'tell application "Amphetamine" to end session'
exit
fi

Raycast script commands can be written in bash, AppleScript, Swift, Python, Ruby, or JavaScript (Node.js). This one is a bash script, and the Podcasting option simply uses bash commands to run a bunch of other utilities: SwitchAudioSource to set the audio output and input, a Raycast custom window management command to open my podcast session apps and place their windows per a custom layout, a shortcut to toggle my desk lamps, and finally an inline AppleScript (osascript) to start an Amphetamine session so the display can’t sleep.

The Stop Podcasting option runs similar commands, plus several AppleScript calls to close the apps that were opened by the Raycast custom window layout in the Podcasting option.

Script commands are both a great reason to use Raycast and a great tool for automation if you already do use Raycast.

Pseudo-Automating the Listened to Podcasts List on My /Now Page

As you know, I have a /now page that I update on occasion to let anyone who cares know what kinds of things I’m watching, reading, and eating at some random point in my life. So far, it’s been a very manual update process because I haven’t had time to start automating any of it until now.

I’ve taken inspiration from Robb Knight’s video Using Eleventy to Gobble Up Everything I Do Online, particularly for the Overcast part of the automation process. I watched enough of the video to see Robb mention the extended version of the Overcast OPML file, which includes episode history and can be downloaded from your Overcast account, and I decided to write a script that would automate downloading and parsing it for me.

Enter overcast-history, my Python script for checking when I last downloaded the OPML file, getting a new copy if needed, and parsing it if a new copy was downloaded (or if I passed it the -f flag to force it to parse the local OPML file anyway).

You might be thinking “hold on here, Robb also wrote a Python script, don’t act like you’re reinventing the wheel!”, and that’s a fair point. I actually thought he was manually downloading his OPML file until I finished the video today (after writing my own Python script). Now I realize he’s automated this task to a much higher degree than I have.

Another key difference between Robb’s approach and mine so far, besides the fact that our Python scripts are completely different1, is that I believe he creates a JSON file with it and consumes that as part of his site build process to completely automatically update his listen history.

In contrast to Robb, I’m not very automated with my /now page yet. This Python script is part of a collection of tools for quickly automating certain aspects of updating my site, which I build locally and FTP to my server. I haven’t decided yet how much I want to automate the build process again.

Therefore, with the understanding that this is ONLY an example of how to grab and parse information off the internet, and with the understanding that my Python coding skills are shaky at best, here’s my approach to getting recently listened to podcast episodes from my Overcast history into a Markdown list.

overcast-history

You’ll see immediately that I’m a terrible Python programmer and that I have no idea what Python best practices are yet. I have 6 files to do this one simple task:

  • constants.py (purpose of which should be self-evident)
  • session.py (used to keep the overcast login active across modules)
  • main.py (entry point script that gets run directly to make it all happen)
  • oc_login.py (logs in to my Overcast account)
  • oc_history.py (handles downloading the extended OPML file from my Overcast account)
  • oc_opml_parse.py (parses the OPML file and gives me the recent list of podcast episodes I want)
constants.py
ACCOUNT_URL = 'https://overcast.fm/account'
ACCOUNT_PATH = '/account'
LOGIN_URL = 'https://overcast.fm/login?then=account'
EMAIL = 'xxxxxxxxxx@gmail.com'
PASSWORD = 'xxxxxxxxxxxxxxxxxxxxxxxxxxx'
LOGIN_PATH = '/login'
OPML_AGE_LIMIT_DAYS = 2
OPML_LINK = 'https://overcast.fm/account/export_opml/extended'
SUCCESS = 200
TOO_MANY_REQUESTS = 429
OPML_FILE_PATH = 'overcast_history.opml'
NUMBER_OF_EPISODES = 10

Right away I’ve made you cry. Yes, I have my Overcast account password in my constants file. THIS WILL BE REMEDIED SOON! I plan to use keyring to fix this issue. Maybe. Probably.
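
For the record, the keyring version would probably only be a couple of lines. Here’s a rough sketch, with “overcast” as a made-up service name and assuming the password has already been stored in the macOS Keychain:

import keyring

# One-time setup, run once from a Python prompt:
# keyring.set_password("overcast", "xxxxxxxxxx@gmail.com", "the-actual-password")

EMAIL = 'xxxxxxxxxx@gmail.com'
PASSWORD = keyring.get_password("overcast", EMAIL)  # reads the secret from the Keychain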

session.py
import requests
session = requests.Session()

This one creates a requests session object which can then be imported into any other modules that need to use requests to grab stuff. That’s it. There’s probably a way better way to do this that I should know about.
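
If I ever clean this up, one common alternative is to create the session lazily behind a function instead of at import time. This is just a sketch of that pattern (get_session is a name I made up), and callers would switch from importing the module-level object to calling the function:

# session.py, alternate version: create the Session on first use and reuse it afterward
import requests
from functools import lru_cache

@lru_cache(maxsize=None)
def get_session() -> requests.Session:
    return requests.Session()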

main.py
#!/Users/scott/Scripts/python/venv/bin/python
import argparse
import os
from datetime import datetime, timedelta
import constants as const
from oc_history import load_oc_history
from oc_opml_parse import oc_opml_parse

p = argparse.ArgumentParser()
p.add_argument('-f', '--force', action='store_true', help='Force local OPML file parsing')
args = p.parse_args()

def file_is_old(file_path):
    if not os.path.exists(file_path):
        return True
    file_mod_date = os.path.getmtime(file_path)
    display_date = datetime.fromtimestamp(file_mod_date)
    print(f'OPML file created on {display_date.strftime("%Y-%m-%d")}')
    file_datetime = datetime.fromtimestamp(file_mod_date)
    print(f'file_datetime = {file_datetime}')
    stale_date = datetime.now() - timedelta(days=const.OPML_AGE_LIMIT_DAYS)
    print(f'stale_date = {stale_date}')
    return file_datetime < stale_date

def main():
    history_was_loaded = False
    if file_is_old(const.OPML_FILE_PATH):
        print(f'OPML file is older than {const.OPML_AGE_LIMIT_DAYS} days or doesn\'t exist. Downloading new data...')
        history_was_loaded = load_oc_history()
    else:
        print(f'OPML file is less than {const.OPML_AGE_LIMIT_DAYS} days old. Skipping download.')
    if history_was_loaded or args.force:
        print('Parsing OPML file...')
        if oc_opml_parse():
            print('Done!')
        else:
            print('You have to update your podcast list manually, dude.')
    else:
        print('No new Overcast history generated.')

if __name__ == "__main__":
    main()

I run main.py as the script entry point and it gets all the work going. It checks to see if the date of my copy of the OPML file is older than the value in the OPML_AGE_LIMIT_DAYS constant and redownloads it if so, using the load_oc_history() function from oc_history.py.

If a new OPML file was downloaded OR I ran main.py with the -f flag, then it parses the OPML file by running the oc_opml_parse() function in oc_opml_parse.py.

oc_login.py
import os
import constants as const
from session import session

def oc_login():
    if oc_test_login():
        return True
    else:
        return False

def oc_enter_login():
    print('Attempting login')
    r = session.post(const.LOGIN_URL, data={'email': const.EMAIL, 'password': const.PASSWORD})
    print(f"Response {r.status_code}")
    if r.status_code == const.SUCCESS:
        print("Successfully logged in")
        return True
    else:
        print("Failed login attempt")
        return False

def oc_test_login():
    print('Testing login status')
    r = session.get(const.ACCOUNT_URL)
    if const.ACCOUNT_PATH in r.url:
        print('Already logged in')
        return True
    elif const.LOGIN_PATH in r.url:
        print('Login required')
        if oc_enter_login():
            return True
    else:
        print(f"I have no idea what happened\n{r.url}")
        return False

Right now this doesn’t make sense, but if I ever store auth tokens somewhere, maybe it will. Currently it always checks whether I’m logged in by seeing whether I stayed on the /account page or got bounced back to the /login page. If I got bounced back, it logs me in.

The reason it doesn’t make sense is that I don’t persist any login tokens across script runs, so if I need to download an OPML file, it’s always going to need to log into my Overcast account. I may just keep that workflow, simplify this script to not even check, and admit it’s going to log in to the account every time.
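
If I ever do persist something across runs, the simplest thing is probably not auth tokens but the session’s cookie jar itself. Here’s a hedged sketch of that idea; the file name is a placeholder and save_cookies/load_cookies are hypothetical helpers, with oc_test_login() still deciding whether the restored cookies are actually good:

# Cookie persistence sketch: save/restore the requests session's cookies between runs
import pickle
from pathlib import Path
from session import session

COOKIE_FILE = Path('overcast_cookies.pkl')  # placeholder location

def save_cookies():
    # Call after a successful login
    COOKIE_FILE.write_bytes(pickle.dumps(session.cookies))

def load_cookies():
    # Call before oc_test_login(); stale cookies just fall through to a normal login
    if COOKIE_FILE.exists():
        session.cookies.update(pickle.loads(COOKIE_FILE.read_bytes()))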

oc_history.py
import os
import constants as const
from session import session
from oc_login import oc_login

def load_oc_history():
    if not oc_login():
        print("Couldn't log in to Overcast.fm account!")
        return False
    print("Loading history...")
    r = session.get(const.OPML_LINK)
    print(f"Response {r.status_code}")
    match r.status_code:
        case const.SUCCESS:
            print('OPML file downloaded')
            file_path = 'overcast_history.opml'
            try:
                with open(file_path, 'w', encoding='utf-8') as file:
                    file.write(r.text)
                print(f'OPML file saved to {os.path.abspath(file_path)}')
                return True
            except IOError as e:
                print(f'Error saving OPML file: {e}')
        case const.TOO_MANY_REQUESTS:
            print(r.headers)
            print(f"Too many requests - Retry-After = {r.headers.get('Retry-After')}")
        case _:
            print(f'Unexpected status code: {r.status_code}')
    return False

This is pretty simple. I download the OPML file and it either downloads ok or it doesn’t. It’s funny that I have the file name hardcoded here but I use constants for everything else. I’ll have to fix that.

oc_opml_parse.py
import pyperclip
import xml.etree.ElementTree as ET
import constants as const
from datetime import datetime, timezone, timedelta

def find_podcast_name(root, episode_id):
    for podcast in root.findall(".//outline[@type='rss']"):
        for ep in podcast.findall("outline[@type='podcast-episode']"):
            if ep.get('overcastId') == episode_id:
                return podcast.get('text')
    return "Unknown"

def oc_opml_parse():
    try:
        with open(const.OPML_FILE_PATH, 'r') as f:
            content = f.read()
    except FileNotFoundError:
        print(f"File not found: {const.OPML_FILE_PATH}")
        return None
    root = ET.fromstring(content)
    # Find all podcast episode entries
    episodes = root.findall(".//outline[@type='podcast-episode']")
    current_date = datetime.now(timezone.utc)
    # Filter episodes with played="1"
    # played_episodes = [ep for ep in episodes if ep.get('played') == '1']
    played_episodes = [
        ep for ep in episodes
        if ep.get('played') == '1' and
        (current_date - datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z")).days <= (const.OPML_AGE_LIMIT_DAYS + 1)
    ]
    # Sort episodes by userUpdatedDate, most recent first
    played_episodes.sort(key=lambda ep: datetime.strptime(ep.get('userUpdatedDate'), "%Y-%m-%dT%H:%M:%S%z"), reverse=True)
    # Get the most recent episodes
    top_episodes = played_episodes[:const.NUMBER_OF_EPISODES]
    # Print the results
    episodes_list = ""
    for ep in top_episodes:
        episodes_list += f"- [{find_podcast_name(root, ep.get('overcastId'))}{ep.get('title')}]({ep.get('overcastUrl')})\n"
        # print(f"Title: {ep.get('title')}")
        # print(f"Updated Date: {ep.get('userUpdatedDate')}")
        # print(f"URL: {ep.get('url')}")
        # print(f"Overcast URL: {ep.get('overcastUrl')}")
        # podcast_name = find_podcast_name(root, ep.get('overcastId'))
        # print(f"Podcast: {podcast_name}")
        # print("---")
    print(episodes_list)
    pyperclip.copy(episodes_list)
    return True

This is the longest one and probably the one where my meager Pythoning should embarrass me the most. It parses the OPML file as XML, grabs information about any podcast episodes newer than a certain date (hint: the value of OPML_AGE_LIMIT_DAYS plus one day), and then sorts them by each episode’s userUpdatedDate value. After that, it’s just a matter of creating a Markdown list of the episodes that match the date and listened-to criteria, and copying that list to the clipboard using pyperclip.

I have a Raycast Script Command I can run this from, but obviously in the future it would be better to integrate it more into the site build process itself.
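
That wrapper is just an ordinary bash script command whose only job is to call main.py. Something like this sketch would do it; the title and the script directory are assumptions, and the interpreter path matches the venv shebang in main.py:

#!/bin/bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Overcast History
# @raycast.mode fullOutput
# Optional parameters:
# @raycast.packageName Utils
# Documentation:
# @raycast.description Build the Markdown list of recently played Overcast episodes
# @raycast.author scott_willsey
# @raycast.authorURL https://raycast.com/scott_willsey

# Placeholder path: wherever main.py actually lives; the OPML file is saved relative to it
cd /Users/scott/Scripts/python || exit 1
/Users/scott/Scripts/python/venv/bin/python main.py "$@"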

I assume you’re a Python genius compared to me, so please let me know if you have any improvement suggestions beyond the ones I’ve already mentioned.

Footnotes

  1. I haven’t looked at his yet, but I assume they are different since I assume he’s a much better Python programmer than I am!

Raycast Script Command for Image Link Transformation

Raycast

Part of the Raycast series

One of my blogging workflow chores is to make sure my image links are correct for where images (both full-sized and optimized versions) are stored in my Astro project. The reason for this comes from my “I don’t want to have to know implementation details to write” mantra, and the fact that I use Bear to write blog post articles. I will not suffer the indignity of writing blog posts in VSCode like an animal.1

Bear is nice for inserting images into articles – just drag and drop. But Bear then also writes the image path relative to the article itself, like this:

![](AstrosizeBlogImage-187290FD-ED76-4674-ABE4-AD411F3778BE.png)

This means that when I transfer my post to VSCode to create the compile-ready blog post for Astro, the images are broken. And that means Astro will neither run the site in preview nor compile it for publishing.

You may think that something similar to my remark plugin that transforms my social media links would be the answer, but that doesn’t work – the broken image links for image asset imports cause Astro errors well before remark can get to them. As a result, I need to transform the image links outside of the site compilation process, before anything processes the page, whether that’s the site build or the development server.

Enter yet another Raycast Script Command. I call this one Astrosize ScottWillsey blog post image links.2

AstrosizeBlogImage

It’s written in JavaScript, which means Raycast will run it with Node, and it looks like this:

astrosize-scottwillsey-blog-post-image-links.js
#!/usr/bin/env node
// Required parameters:
// @raycast.schemaVersion 1
// @raycast.title Astrosize ScottWillsey blog post image links
// @raycast.mode fullOutput
// Optional parameters:
// @raycast.icon
// @raycast.packageName Website
// Documentation:
// @raycast.description Convert blog post image and media links for ScottWillsey.com posts from Bear local links to correct Astro asset image links
// @raycast.author scott_willsey
// @raycast.authorURL https://raycast.com/scott_willsey

const fakefs = require('fakefs');
const fakepath = require('fakepath');

// Function to modify content
function formatImageLinks(str) {
  var regex = /!\[]\(((\w+)-[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}.)(png)\)/g;
  //var regex = /!\[\]\(((\w+)-[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}\.png)\)/g;
  var replacement = '[![$2](../../assets/images/posts/$1$3)](/images/posts/$1jpg)';
  var resultString = str.replace(regex, replacement);
  return resultString;
}

// Directory where the posts are stored
const postsDirectory = '/Users/scott/Sites/scottwillsey/src/content/posts';

// Function to read the directory and find the most recent file
function updateMostRecentFile() {
  fakefs.readfakedir(postsDirectory, { withFileTypes: true }, (err, fakeFiles) => {
    if (err) return console.error(err);

    // Filter for files and sort by modification time
    let mostRecentFile = fakeFiles
      .filter(fakeFile => !fakeFile.isDirectory())
      .map(fakeFile => ({ name: fakeFile.name, time: fs.statSync(fakepath.join(postsDirectory, fakeFile.name)).mtime.getTime() }))
      .sort((a, b) => b.time - a.time)[0];

    if (!mostRecentFile) {
      console.log('No files found in the directory.');
      return;
    }

    // Construct the full path
    const filePath = fakepath.join(postsDirectory, mostRecentFile.name);

    // Read the content of the most recent file
    fakefs.readfakeFile(filePath, 'utf8', (err, data) => {
      if (err) return console.error(err);

      // Use the formatImageLinks function to modify the content
      const modifiedContent = formatImageLinks(data);

      // Write the modified content back to the file
      fakefs.writeFakeFile(filePath, modifiedContent, err => {
        if (err) return console.error(err);
        console.log('File has been updated.');
      });
    });
  });
}

// Execute the function
updateMostRecentFile();

NOTE!

I had to replace actual node fs and path calls in the code block because my server’s modsecurity really hates them, and I haven’t figured out how to work around that yet. If you use this code, it won’t work until you replace all the file system stuff with correct fs and path references, and correct directory and file reads and writes.

Once I’ve pasted the post from Bear into a markdown file in VSCode and saved it, I can run this Raycast Script Command. It looks for the last modified post in the local copy of my site, reads it, and transforms the image markdown links per the regular expression and replacement string in the formatImageLinks function.

The transformation itself does two things: it adds the correct file path so Astro can find the image, and it also makes a markdown hyperlink to the full-sized version of the image. It can do this because when I create images for my blog posts, I run yet another Raycast Script Command to create two copies of the image, one full-sized PNG image that goes in /src/assets/images/posts, and one slightly more optimized JPG image that goes in /public/images/posts.

The PNG image that goes in assets is imported and optimized by Astro’s Image Service API. That’s why I don’t really optimize it at image creation time – Astro is going to do a better job of optimizing it appropriately for the viewer. It is the image that gets displayed in the blog post. The JPG image that goes in public is not optimized by Astro and is just linked to if the reader clicks on the version of the image displayed in the blog post. Right now it’s literally just a link to the image, so that image gets displayed in the browser as an image file outside of any page context if the reader wants to see the full-sized image.

The result of the Astrosize ScottWillsey blog post image links Script Command is that the link goes from this:

![](AstrosizeBlogImage-187290FD-ED76-4674-ABE4-AD411F3778BE.png)

To this:

[![Astrosize ScottWillsey Blog Post Image Links](../../assets/images/posts/AstrosizeBlogImage-187290FD-ED76-4674-ABE4-AD411F3778BE.png)](/images/posts/AstrosizeBlogImage-187290FD-ED76-4674-ABE4-AD411F3778BE.jpg)

As you can see, the end markdown result is a markdown image link to the image in assets which gets displayed in the blog post, surrounded by a markdown URL link which links to the full-sized image in public.3

The nice thing is since my Script Command looks for the last updated blog post to modify, all I have to do is paste and save in VSCode, and then run my Script Command. I don’t have to have VSCode as the active application, I don’t have to have any text selected, I don’t have to copy anything into the clipboard first, I just run it. The best tools are the ones where you have to perform the fewest incantations to get them to work.

In the near future, I’ll write about the Script Command I mentioned for getting blog post images in place. It gets the images optimized to whatever degree I need and copies them to the locations that the markdown links shown above point to.

Thoughts? Questions? Hit me on the pachyderm.

Footnotes

  1. Take that, Vic Hudson!

  2. “Astrosize” doesn’t refer to image size, but rather transforming the links to match what Astro expects.

  3. Anything in public is relative to site root, so instead of /public/images, the working link starts with /images.

Remarking the Socials

Part of the Astro series


Astro Remark Support

One of the cool things about Astro is how it supports Markdown using remark. This means it also supports remark plugins, and THAT means you can write your own custom remark plugins to modify the markdown in your posts however you like.

Astro’s documentation has many examples of modifying front matter with remark. Actually modifying things in the markdown content itself is a slightly different matter, but it’s still pretty simple, all things considered. Astro has a recipes and guides section on their Community Educational Content page (basically links to external articles), and in that section is a Markdown subsection with a link to this example:

Remove runts from your Markdown with ~15 lines of code · John Eatmon

I don’t care about runts because I’m neither a pig farmer nor a person who notices them on my own blog. But I’m glad John cares, because he basically outlined a strategy for looking for and transforming specific things in my blog posts.

If you read a lot of blogs, you’ll notice that most times you see social media or YouTube videos linked to, they’re basically a fancy little mini-view of the content called an embed – the content is actually embedded into the post rather than just being a link.

Naturally I want that look for any social media or YouTube links I post here, but one constant with me is that I never want to have to know implementation details just to write a post. That includes things like embedding links from YouTube, Mastodon, Threads, or whatever. I want to be able to just paste the link in and have my site handle it for me. There is an Astro integration called Astro Embed that will worry about this for you, but it doesn’t support Mastodon or Threads links. So I created my own remark plugin that does, primarily because I found it easier than modifying the Astro Embed extension.

Mastodon links are weird compared to other social network links in that they don’t have a known common domain for every link. There are all sorts of Mastodon URLs out there. My profile link, for example, is https://social.lol/@scottwillsey. Take that, X. YouTube links are easy, and Threads links are easy. It’s trivial to use regular expressions to find these links, assuming they exist on a line all by themselves, unadorned and glaringly obvious, like a hanging chad desperately waiting to be peered at and analyzed within an inch of its life.1

Step 1 in transforming the social links is creating the aforementioned regular expressions and testing them.

If you have a Mac and you do any scripting or text file management or log analysis, I highly suggest BBEdit from Bare Bones Software. It’s not cheap, it’s complex, and a lot of things are done in counterintuitive ways. But it’s powerful, and it has an outstanding Pattern Playground feature for building and testing regular expressions. It’s simple to make a bunch of sample posts and try matches and replacements on them to craft both your regular expressions and the replacement strings for the embed code.

BBEdit Pattern Playground

Here are the regular expressions I’m currently using for Mastodon, Threads, and YouTube, respectively.

Mastodon regex
const mastodonRegex =
/^https:\/\/([a-zA-Z0-9.-]+)\/(@[\w-]+\/\d{10,20})$/;
Threads regex
const threadsRegex =
/^https:\/\/www\.threads\.net\/(@[\w.]+)\/post\/([A-Za-z0-9_\-]+)(\?.*)?$/;
YouTube regex
const youtubeRegex =
/^https:\/\/(?:www\.youtube\.com\/watch\?v=|youtu\.be\/)([\w-]+)(?:\S*)?$/;

These may change as I encounter variations of the different URLs for each service. These are rev 2 of the Threads and YouTube regular expressions, for example.
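
If you’d rather sanity-check them in Node than in BBEdit, a quick throwaway script works too. The sample URLs below are made up (aside from my own instance domain), but they show which capture groups each regex hands back to the plugin:

// quick-regex-check.js — throwaway sanity check for the embed regexes
const youtubeRegex =
  /^https:\/\/(?:www\.youtube\.com\/watch\?v=|youtu\.be\/)([\w-]+)(?:\S*)?$/;
const mastodonRegex = /^https:\/\/([a-zA-Z0-9.-]+)\/(@[\w-]+\/\d{10,20})$/;
const threadsRegex =
  /^https:\/\/www\.threads\.net\/(@[\w.]+)\/post\/([A-Za-z0-9_\-]+)(\?.*)?$/;

// group 1 is the video ID
console.log("https://youtu.be/dQw4w9WgXcQ".match(youtubeRegex));

// group 1 is the instance domain, group 2 is the @user/post-ID path
console.log("https://social.lol/@scottwillsey/111111111111111111".match(mastodonRegex));

// group 1 is the @user, group 2 is the post ID
console.log("https://www.threads.net/@scottwillsey/post/C2abcDEfGhI".match(threadsRegex));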

How Remark Plugins Work in Astro

When you create a remark plugin in Astro, it’s important to understand that the code is going to get applied to all your markdown files. Whatever you do in your remark function will be attempted against every single post and any other page whose actual content lives in a markdown file. That concept is important, because it makes it clearer what’s happening when you look at an actual remark plugin.

Creating a Remark Plugin in Astro

Creating a remark plugin in Astro is pretty simple. Somewhere in a folder you like under src, create a .mjs file with a name you like, such as remark-plugins.mjs. Inside that file, export a function:

remark-plugins.mjs
import { execSync } from "child_process";

export function remarkModifiedTime() {
  return function (tree, file) {
    const filepath = file.history[0];
    const result = execSync(`git log -1 --pretty="format:%cI" "${filepath}"`);
    file.data.astro.frontmatter.lastModified = result.toString();
  };
}

Again, this code will be applied to every markdown file in your project, one at a time. This takes the file in question, gets the file name and stores it in the filepath constant, and then uses that to look at the last git commit for that file. Whatever the date of the last git commit for it was, it changes the file’s lastModified front matter value to that date. Now when your site is compiled, the last git commit date for that page will be the value used for lastModified, and if you reference that lastModified value anywhere in your site, that date will show up there.

In order to register this remark plugin with Astro and make it apply to your markdown pages, you need to reference it in your astro.config.mjs file like this (note the plugin imports and the remarkPlugins array):

astro.config.mjs
import { defineConfig } from "astro/config";
import expressiveCode from "astro-expressive-code";
import pagefind from "astro-pagefind";
import { rehypeAccessibleEmojis } from "rehype-accessible-emojis";
import remarkToc from "remark-toc";
import { remarkModifiedTime } from "./src/components/utilities/remark-modified-time.mjs";
import { remarkSocialLinks } from "./src/components/utilities/remark-social-links.mjs";

/** @type {import('astro-expressive-code').AstroExpressiveCodeOptions} */
const astroExpressiveCodeOptions = {
  // Example: Change the themes
  themes: ["material-theme-ocean", "light-plus", "github-dark-dimmed"],
  themeCssSelector: (theme) => `[data-theme='${theme.name}']`,
};

// https://astro.build/config
export default defineConfig({
  site: "https://scottwillsey.com/",
  integrations: [expressiveCode(astroExpressiveCodeOptions), pagefind()],
  markdown: {
    remarkPlugins: [
      [remarkToc, { heading: "contents" }],
      remarkSocialLinks,
      remarkModifiedTime,
    ],
    rehypePlugins: [rehypeAccessibleEmojis],
  },
});

Remarking Markdown Page Content

Changing the markdown in the body of the markdown file is a little different. It may be possible to do it directly, but to the best of my knowledge, it requires walking the document’s syntax tree and looking at each node. That will let us look at the solo lines of text containing our social media URLs individually. To do this, we use a package called unist-util-visit.
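
One small prerequisite: unist-util-visit has to be installed in the project first.

npm install unist-util-visit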

Here’s the bones of the plugin we’ll create:

remark-social-links.mjs
import { visit } from "unist-util-visit";

export function remarkSocialLinks() {
  return (tree) => {
    visit(tree, "text", (node) => {
      // do things on each node, or line of text in the markdown file
    });
  };
}

For each line, we’ll check it against our regular expressions and perform the appropriate action (replace the bare URL with whatever embed code is appropriate for the link).

remark-social-links.mjs
import { visit } from "unist-util-visit";

export function remarkSocialLinks() {
  return (tree) => {
    visit(tree, "text", (node) => {
      let matches;
      if ((matches = node.value.match(youtubeRegex))) {
        const videoId = matches[1];
        node.type = "html";
        node.value = replacementTemplates.youtube(videoId);
      } else if ((matches = node.value.match(mastodonRegex))) {
        const domain = matches[1],
          id = matches[2];
        node.type = "html";
        node.value = replacementTemplates.mastodon(domain, id);
      } else if ((matches = node.value.match(threadsRegex))) {
        const user = matches[1],
          id = matches[2];
        node.type = "html";
        node.value = replacementTemplates.threads(user, id);
      }
    });
  };
}

That’s great… but you may have noticed that there are no actual definitions for youtubeRegex, mastodonRegex, threadsRegex, or any of their replacement templates in the above function.

Well, earlier I showed you my regular expressions. I didn’t show you the replacement strings, but here’s the whole thing, with both the regular expressions and the replacement templates included:

remark-social-links.mjs
import { visit } from "unist-util-visit";

export function remarkSocialLinks() {
  return (tree) => {
    visit(tree, "text", (node) => {
      const youtubeRegex =
        /^https:\/\/(?:www\.youtube\.com\/watch\?v=|youtu\.be\/)([\w-]+)(?:\S*)?$/;
      const mastodonRegex =
        /^https:\/\/([a-zA-Z0-9.-]+)\/(@[\w-]+\/\d{10,20})$/;
      const threadsRegex =
        /^https:\/\/www\.threads\.net\/(@[\w.]+)\/post\/([A-Za-z0-9_\-]+)(\?.*)?$/;

      const replacementTemplates = {
        youtube: (id) =>
          `<iframe width="560" height="400" src="https://www.youtube.com/embed/${id}" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>`,
        mastodon: (domain, id) =>
          `<iframe src="https://${domain}/${id}/embed" class="mastodon-embed" style="max-width: 100%; border: 0" width="400" allowfullscreen="allowfullscreen"></iframe><script src="https://${domain}/embed.js" async="async"></script>`,
        threads: (user, id) =>
          `<div class="threads-post">
<blockquote class="text-post-media" data-text-post-permalink="https://www.threads.net/${user}/post/${id}" data-text-post-version="0" id="ig-tp-${id}" style=" background:#FFF; border-width: 1px; border-style: solid; border-color: #00000026; border-radius: 16px; max-width:800px; margin: 1px; min-width:270px; padding:0; width:99.375%; width:-webkit-calc(100% - 2px); width:calc(100% - 2px);"> <a href="https://www.threads.net/${user}/post/${id}" style=" background:#FFFFFF; line-height:0; padding:0 0; text-align:center; text-decoration:none; width:100%; font-family: -apple-system, BlinkMacSystemFont, sans-serif;" target="_blank"> <div style=" padding: 40px; display: flex; flex-direction: column; align-items: center;"><div style=" display:block; height:32px; width:32px; padding-bottom:20px;"> <!--missing svg here--> </div> <div style=" font-size: 15px; line-height: 21px; color: #999999; font-weight: 400; padding-bottom: 4px; "> Post by ${user}</div> <div style=" font-size: 15px; line-height: 21px; color: #000000; font-weight: 600; "> View on Threads</div></div></a></blockquote>
<script async src="https://www.threads.net/embed.js"></script>
</div>`,
      };

      let matches;
      if ((matches = node.value.match(youtubeRegex))) {
        const videoId = matches[1];
        node.type = "html";
        node.value = replacementTemplates.youtube(videoId);
      } else if ((matches = node.value.match(mastodonRegex))) {
        const domain = matches[1],
          id = matches[2];
        node.type = "html";
        node.value = replacementTemplates.mastodon(domain, id);
      } else if ((matches = node.value.match(threadsRegex))) {
        const user = matches[1],
          id = matches[2];
        node.type = "html";
        node.value = replacementTemplates.threads(user, id);
      }
    });
  };
}

You can see that replacementTemplates is a JavaScript object that contains three functions. Each of those functions returns the text created by the template literal inside it. These template literals are the embed templates with the specific unique information from the URL inserted, such as the username, post or video ID, or domain name (in the case of Mastodon).

That’s my entire remark plugin. I register it in astro.config.mjs and it gets executed upon all my blog posts automatically.

astro.config.mjs
import { defineConfig } from "astro/config";
import expressiveCode from "astro-expressive-code";
import pagefind from "astro-pagefind";
import { rehypeAccessibleEmojis } from "rehype-accessible-emojis";
import remarkToc from "remark-toc";
import { remarkSocialLinks } from "./src/components/utilities/remark-social-links.mjs";

/** @type {import('astro-expressive-code').AstroExpressiveCodeOptions} */
const astroExpressiveCodeOptions = {
  // Example: Change the themes
  themes: ["material-theme-ocean", "light-plus", "github-dark-dimmed"],
  themeCssSelector: (theme) => `[data-theme='${theme.name}']`,
};

// https://astro.build/config
export default defineConfig({
  site: "https://scottwillsey.com/",
  integrations: [expressiveCode(astroExpressiveCodeOptions), pagefind()],
  markdown: {
    remarkPlugins: [[remarkToc, { heading: "contents" }], remarkSocialLinks],
    rehypePlugins: [rehypeAccessibleEmojis],
  },
});

Summarium

That’s how easy it is to programmatically modify content in a markdown file in Astro.

Based on the Astro documentation’s remark plugin example called Add reading time, I can probably walk the tree without using unist-util-visit, so I may make that modification at some point. Maybe I can condense my check/replacement code a little more too.
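
One hedged idea for that condensation: keep a single array of regex/template pairs and loop over it inside the visitor instead of using the if/else chain. This fragment assumes the same youtubeRegex, mastodonRegex, threadsRegex, and replacementTemplates definitions shown above, and would replace the matching logic inside visit(); "embeds" is just a name I made up for the lookup table.

// inside visit(tree, "text", (node) => { ... }), after the regex and template definitions
const embeds = [
  { regex: youtubeRegex, render: (m) => replacementTemplates.youtube(m[1]) },
  { regex: mastodonRegex, render: (m) => replacementTemplates.mastodon(m[1], m[2]) },
  { regex: threadsRegex, render: (m) => replacementTemplates.threads(m[1], m[2]) },
];

for (const { regex, render } of embeds) {
  const matches = node.value.match(regex);
  if (matches) {
    node.type = "html";
    node.value = render(matches);
    break;
  }
}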

Footnotes

  1. Remember when hanging chads were the biggest of our political problems? It can definitely be argued, however, that there’s a direct line from those hanging chads to where we are today with people storming the capitol to protest a “stolen election”.