Incremental Improvements

Eventually the daily cutups will be completely automated. But today we are not quite there yet. While a series of scripts creates the cutups and collages, choosing and inserting the collage into the post each day remains a manual task. I don't really know how to automate that part yet, but I have some ideas, and realising those is a bit further down the road.

Since I knocked together some scripts to create the cutup each day last November, I've only been making small changes and tweaks here and there. Previously the cutup job would run at a set time and choose the last 100 stories sorted by the id the database assigned to each piece of content, which is not necessarily the most recently updated stories by date. What I really wanted was something like a random selection of stories from the past day, not the 100 most recent rows at a fixed time, and that is what I was trying and mostly failing to get. So today I changed the query from this:

select content from FEEDENTRYCONTENTS order by id DESC LIMIT 100;

Which ordered the results solely on the arbitrary id number the database assigned to each piece of content. To this slightly more complicated query:

select content FROM FEEDENTRIES INNER JOIN FEEDENTRYCONTENTS ON content_id = FEEDENTRYCONTENTS.id where updated > DATE_SUB(CURDATE(), INTERVAL 1 DAY) order by RAND() LIMIT 100;

Which simply joins in the entries table, which holds the date each entry was updated, and selects 100 random entries from the past day. I am not exactly sure how it defines the past day: it seems like entries could range from the current moment all the way back to just after midnight at the start of the previous day. So the results may cover a little more than the past 24 hours, which is good enough for me.
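As far as I can tell, CURDATE() has no time component, so DATE_SUB(CURDATE(), INTERVAL 1 DAY) works out to midnight at the start of yesterday, while a strict rolling window would subtract from NOW() instead. A little shell sketch (GNU date, purely illustrative) of the difference between the two cutoffs:

```shell
# Cutoff implied by DATE_SUB(CURDATE(), INTERVAL 1 DAY): midnight yesterday.
midnight_cutoff=$(date -d "yesterday 00:00" +%s)
# Cutoff implied by DATE_SUB(NOW(), INTERVAL 1 DAY): a rolling 24 hours ago.
rolling_cutoff=$(date -d "24 hours ago" +%s)
# The midnight cutoff is never later than the rolling one, so the query's
# window stretches towards 48 hours late in the day.
echo "extra hours in the window: $(( (rolling_cutoff - midnight_cutoff) / 3600 ))"
```

Swapping CURDATE() for NOW() in the query would tighten it to exactly 24 hours, if that ever mattered.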

It's also possible for me to grab the url of each story, and the thought crossed my mind to fetch more text from the actual url that my RSS reader is serialising. Not all feeds provide the whole text of a story; some give just a summary and sometimes nothing but a title. But I am getting enough text to work with, so I am not really inclined to fetch more. Whatever is in the RSS reader's database is what I have to work with.

There are more improvements I want to make on this long road to full automation. I would like to use those urls for each story to create hyperlinks back to the sources, randomly allocating them to bits of text in each post. No idea how to do that yet, but it will take some experimentation I am sure. I might end up creating something like a staging setup, only I need to think of a better name for it than staging. That way I could start experimenting more while keeping the current process going.
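One way the random hyperlinks might work, sketched in bash with made-up file names (cutup.txt holding the cutup lines, urls.txt holding one source url per line, neither of which exists in the real pipeline), is to pick a random url for each line and wrap one random word in it:

```shell
#!/usr/bin/env bash
# Hypothetical input files, just for the sketch.
printf 'scissors slice the morning news\n' > cutup.txt
printf 'https://example.com/a\nhttps://example.com/b\n' > urls.txt

while read -r line; do
  url=$(shuf -n 1 urls.txt)         # pick a random source story
  words=($line)                     # split the line into words
  idx=$(( RANDOM % ${#words[@]} ))  # pick a random word to turn into a link
  words[idx]="<a href=\"$url\">${words[idx]}</a>"
  echo "${words[@]}"
done < cutup.txt > linked.txt
```

The real version would have to decide how many links per post and how to avoid mangling punctuation, but the shuffle-and-wrap idea is the core of it.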

Welp, the next incremental improvement might be to create some montages automatically when I have enough small images to work with. Something like…

MONTAGES=$(for i in coalesce-*; do identify "$i" | grep 150x150; done | wc -l)
if [ "$MONTAGES" -gt "100" ]; then
    echo "create montages"
fi
And they say this isn’t work.
