You may remember that about a month ago I wrote about my Hazel workflow for publishing blog posts from my iPad. That has been working well for me, but it was — by my own admission — a rather gaffer-taped-together solution. One minor problem was that my iMac isn’t awake all the time, just most of the time. So the publishing workflow is most vulnerable exactly when I’m away from home (and most need it), because that’s when my iMac is likely to be asleep. Since I’ve got an always-on Linode server, which actually serves the website, it seemed silly not to use it to facilitate my remote publishing.
There are a number of different ways that I could have set things up. I could have automated just the fetching of the articles to be published, and then set off the build process on the server manually using Panic’s Prompt application on the iPhone or iPad. That would have worked and been little trouble, but I felt that if I was going to the trouble of scripting a solution, I might as well automate the whole process. As I’m still getting to grips with Python, it was a nice little project to teach myself about Python modules, and to get to know some of the built-in modules better. More importantly, I had a lot of fun solving all the little problems.
In outline, it works like this:
- My script (a `main.py` script plus some separate modules) is run by `cron`¹ on the server every 20 minutes. This interval seems about right to me, but it could easily be changed.
- Before I do anything else, I pull down the latest changes to the blog’s repository using `git`, so that any articles I’ve published the normal way are included, and the site is complete.
- I use the Dropbox Core API to connect to Dropbox and get the contents of my ‘ToPublish’ folder. If the folder is empty, there’s nothing to do, and the script logs this and exits.
- If the ‘ToPublish’ folder does have one or more files in it, we start a loop, examining each file in turn. Most of the time, there is only likely to be one file in the folder, but the script will cope with multiple files to be published.
- The file is downloaded to a temporary directory on the server, and parsed to get the Title, Slug and Date from the metadata at the top of the file. This information is then used to form a proper filename (with the `.md` extension), and the renamed file is moved to the content folder. This is handy because I write posts on the iPad in WriteUp, which lets you use TextExpander shortcuts, so I can automate inserting the metadata in the file, but naming the file itself is fiddly.
- The metadata is used to move the original file from ‘ToPublish’ to the ‘Published’ folder and rename it. You don’t want stuff hanging around in ‘ToPublish’, as it would be duplicated the next time `cron` fires. Moving it to the ‘Published’ folder also gives me a backup if things go awry.
- Once all the files have been processed, the building starts. Pelican is called to build the site, but this time, it places the constructed HTML and CSS files in the actual web directory on the server, so it is immediately published.
- Now that we’ve got new content, we call `git` again to add, commit and push the new articles to the repository.
- A variant on my previous ping script pings Feedpress to get it to update the feed, and informs the PubSubHubbub hubs that there’s new content.
- Finally — the finishing touch — I use Postmark (via the `pystmark` library) to email me the information that I’ve logged to a file along the way, so that I’m notified when the site is built. Postmark is really easy to use, and you get 10,000 free emails, which will basically last me forever at my current rate of usage.
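For the curious, the `cron` side is a one-liner; something along these lines (the paths here are illustrative, not my actual setup):

```
# m/h/dom/mon/dow          command (runs every 20 minutes, logging output)
*/20 * * * * /usr/bin/python /home/me/blog/main.py >> /home/me/blog/publish.log 2>&1
```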
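The Dropbox polling step looks roughly like this. It’s a sketch against the old Core API Python SDK, with a placeholder token and my folder names, rather than my actual code:

```python
def pending_paths(folder_meta):
    """Pick the files (not sub-folders) out of a Dropbox metadata() result."""
    return [item['path'] for item in folder_meta.get('contents', [])
            if not item['is_dir']]

def process_to_publish():
    import dropbox  # the v1 Core API SDK; imported here so the sketch reads without it
    client = dropbox.client.DropboxClient('MY-ACCESS-TOKEN')  # placeholder token
    paths = pending_paths(client.metadata('/ToPublish'))
    if not paths:
        print('Nothing to publish.')  # log and bail out
        return
    for path in paths:
        contents, meta = client.get_file_and_metadata(path)
        # ... parse the metadata, write the renamed file into the content folder ...
        client.file_move(path, path.replace('/ToPublish/', '/Published/'))
```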
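The metadata step is simple string work. A minimal sketch, assuming Pelican-style `Title:`/`Date:`/`Slug:` lines at the top of the post; the helper names (and the date-slug filename pattern) are invented for illustration:

```python
import re

def parse_metadata(text):
    """Pull Title, Date and Slug out of a Pelican-style metadata block."""
    meta = {}
    for line in text.splitlines():
        m = re.match(r"(Title|Date|Slug):\s*(.+)", line)
        if m:
            meta[m.group(1).lower()] = m.group(2).strip()
        elif not line.strip():
            break  # a blank line ends the metadata block
    return meta

def make_filename(meta):
    """Form a proper article filename from the parsed metadata."""
    date = meta["date"].split()[0]  # keep just the YYYY-MM-DD part
    return "{0}-{1}.md".format(date, meta["slug"])

post = """Title: Publishing from the iPad
Date: 2013-05-04 10:00
Slug: publishing-from-the-ipad

Body text here."""

meta = parse_metadata(post)
print(make_filename(meta))  # -> 2013-05-04-publishing-from-the-ipad.md
```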
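The `git` and Pelican steps boil down to a handful of shell commands run with `subprocess`; roughly this (the paths and the commit message are placeholders, and the Dropbox fetching happens between the pull and the build):

```python
import subprocess

def publish_commands(output_dir):
    """The sequence of shell commands the build step boils down to."""
    return [
        ['git', 'pull'],                           # sync the repo before adding anything
        ['pelican', 'content', '-o', output_dir],  # build straight into the web root
        ['git', 'add', 'content'],
        ['git', 'commit', '-m', 'Add remotely published articles'],
        ['git', 'push'],
    ]

def publish(blog_dir, output_dir):
    for cmd in publish_commands(output_dir):
        subprocess.check_call(cmd, cwd=blog_dir)

# publish('/home/me/blog', '/var/www/mysite')  # hypothetical invocation
```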
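The PubSubHubbub ping is just a POST with `hub.mode=publish` and the feed URL. A Python 3 sketch using only the standard library (the hub and feed URLs are examples; I’m not reproducing the Feedpress endpoint here):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def hub_publish_request(hub_url, feed_url):
    """Build the POST that tells a PubSubHubbub hub a feed has new content."""
    data = urlencode({'hub.mode': 'publish', 'hub.url': feed_url}).encode('ascii')
    return Request(hub_url, data=data)

# The real ping; Google's hub answers 204 No Content on success:
# urlopen(hub_publish_request('https://pubsubhubbub.appspot.com/',
#                             'https://example.com/feed.xml'))
```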
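And the Postmark notification, via `pystmark`; a sketch with placeholder addresses and API key, where the log-summarising helper is my own invention rather than part of the library:

```python
def summarise_log(lines):
    """Boil the run log down to a subject line and a body for the email."""
    published = sum(1 for line in lines if line.startswith('Published:'))
    subject = 'Site built: {0} article(s) published'.format(published)
    return subject, '\n'.join(lines)

def notify(log_lines):
    import pystmark  # imported here so the sketch reads without the library installed
    subject, body = summarise_log(log_lines)
    message = pystmark.Message(sender='blog@example.com', to='me@example.com',
                               subject=subject, text=body)
    pystmark.send(message, api_key='POSTMARK-API-KEY')  # placeholder key
```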
That’s it! If this post has been published, it works. In my usual devil-may-care coding style², it’s currently all very rough and ready and completely without exception handling. I’ll tidy it up a bit in due course, but I’m happy with the way it works right now, and I’ve learned a lot about Python along the way.