Past Tense

We are reaching and surpassing dates in real life that were formerly part of our science fiction. The screenshot which leads off this post is from part 1 of Past Tense, a time travel episode from Star Trek: Deep Space Nine. Given what the episode is about, it is even sadder that barely two months before the date in the screenshot the U.S. Supreme Court ruled that bans against sleeping outside do not violate the Eighth Amendment.

Migrating My WordPress Database from a Lightsail Instance to a Standalone Database

Last year, I moved this blog off of an EC2 instance running a too-old version of PHP to a Lightsail instance. I had to restart that instance in order to retrieve the images associated with all the prior posts so they looked exactly as they did before, but the end result was the same blog at a lower monthly cost. Since then, I installed and configured the WP Offload Media Lite plug-in to push all those images to an S3 bucket. Today I decided to move the WordPress database off the Lightsail instance to a standalone database.

Accomplishing this move required cobbling together instructions from Bitnami and AWS (and filling in any gaps with educated guesses). Here are the steps I took to get everything moved over, in the order I took them.

  1. Export the application database from the Lightsail instance. As of this writing, the Bitnami WordPress image still keeps database credentials in a bitnami_credentials file, so using that with the mysqldump command generated the file I would need to import to the new database (backup.sql).
  2. Download backup.sql to my local machine. Connecting to my Lightsail instance with sftp and my SSH key followed by “get backup.sql” pulled the file down.
  3. Download MySQL Workbench. Looking at these import instructions, I realized I didn’t have it installed.
  4. Create a Lightsail database. On the advice of co-workers who also do this with their side projects, I used us-east-2 as the region to set up in. I specified the database name to match the one in the backup.sql file to make things easier later when it was time to update wp-config.php.
  5. Enable data import mode. By default, both data import mode and public mode are disabled. I turned on data import mode and was puzzled for a second when I still couldn’t connect to the database in order to import from backup.sql.
  6. Enable public mode. With public mode disabled, and my backup.sql file (and tools to import it) not already available in a us-east-2 hosted instance or other resource, I couldn’t load the backup data. Once I enabled public mode, I was able to use MySQL Workbench to connect and upload the data.
  7. Disable public mode.
  8. Update wp-config.php to use the new database credentials.
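Condensed into commands, the database legs of the move looked roughly like this. This is a hedged sketch: the key file, instance IP, database endpoint, and database name are placeholders, and the exact Bitnami database name may differ on your image.

```shell
# Step 1, on the Lightsail instance: dump the WordPress database.
# Bitnami keeps the generated password in the bitnami_credentials file.
mysqldump -u root -p bitnami_wordpress > backup.sql

# Step 2, from my local machine: pull the dump down over SFTP.
sftp -i LightsailDefaultKey.pem bitnami@INSTANCE_IP
# sftp> get backup.sql

# Steps 4-6: after creating the standalone database and enabling data
# import and public modes, load the dump (MySQL Workbench does the same).
mysql -h DB_ENDPOINT -u dbmasteruser -p bitnami_wordpress < backup.sql
```

Step 8 is then a matter of pointing the DB_HOST, DB_USER, and DB_PASSWORD constants in wp-config.php at the new endpoint and its credentials.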

To confirm that the post you’re reading now was written to the new database, I turned on the general query log functionality on the database instance to ensure that the server was writing to it. Having confirmed that, I turned off the general query log.
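If you'd rather not click through the console, the same toggle can be done with the AWS CLI. A sketch, assuming the database is named wp-db; the log stream name and parameter spelling are worth double-checking against the Lightsail documentation:

```shell
# Turn on the MySQL general query log for the Lightsail database
aws lightsail update-relational-database-parameters \
  --relational-database-name wp-db \
  --parameters "parameterName=general_log,parameterValue=1,applyMethod=immediate"

# Watch recent log entries to confirm writes are hitting the new database
aws lightsail get-relational-database-log-events \
  --relational-database-name wp-db --log-stream-name general

# Turn the log back off once confirmed
aws lightsail update-relational-database-parameters \
  --relational-database-name wp-db \
  --parameters "parameterName=general_log,parameterValue=0,applyMethod=immediate"
```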

The additional cost of a standalone Lightsail database is worth it for the week’s worth of database backups you get with zero additional effort. Migrating to a newer WordPress instance in the future should be easier as well, now that both the database and media for the site are off-instance. The next step I need to take is upgrading from the lite version of WP Offload Media to the full one. This should offload all the media so I can safely remove it locally.

Great Customer Service Smoothes Out Bad Self-Service

Switching to a truly bundled Disney+ and Hulu experience (both with no ads), from the janky status quo where both services were billed separately and Hulu had ads but Disney+ didn’t, took the great customer service experience I had earlier today. In prior months, I’d made the mistake of following the instructions provided as the self-service approach to accomplishing this, and failed miserably. I switched from annual billing to monthly on Disney+ and tried to switch to the Premium Duo multiple times over multiple months, only to be redirected to Hulu and blocked from signing up for what I wanted.

Today I tried the chat option (with a live human being) and finally got the bundle I wanted–and a refund for the price differential between the new bundle and what I’d been paying. It ultimately took being manually unsubscribed from both Disney+ and Hulu, which the customer service rep accomplished by reaching out to whatever department and systems she needed to, in the span of about 20 minutes. Definitely a 5-star customer service experience–unfortunately made necessary by terrible self-service options.

Plenty of companies almost certainly believe that they will be able to use ChatGPT (or something like it) to replace the people who do this work. But at least initially (and probably for quite a while after that) the fully-automated customer service experience is likely to be worse (if not much worse) than the experience of customer service from people. I’m very skeptical of the idea that an AI chatbot would have driven the same outcome from a customer service interaction as a person did in this case. And this is in a low-stakes situation like streaming services (some number of which will very likely end up on my budget chopping block in 2024). High-stakes customer service situations will not have the same tolerance for mistakes, as shown in the FTC’s 5-year ban on Rite-Aid using facial recognition for surveillance. These are the sorts of mistakes warned about in the documentary Coded Bias years ago, but I have no doubt that other companies will make the same mistakes Rite-Aid did.

In an episode of Hanselminutes I listened to recently, the host (Scott Hanselman) compared two ways AI could be used: the Iron Man suit and Ultron. I hope using AI to augment human capabilities (like the Iron Man suit) is the destination we get back to, after the current pursuit of replacing humans entirely (like Ultron) fails. Customer service experiences that are led by people but augmented by technology will be better for people on both sides of the customer service equation and better for brands.

Flipboard Renewing Its Relevance With the Fediverse

Flipboard is jumping into the fediverse with both feet, according to a piece from The Verge. While the fediverse isn’t where I saw the piece first (that would be on Threads), when Flipboard first announced it was experimenting with Mastodon some months back, it was the first time I’d thought about Flipboard in years (much less used it). Since The Verge piece first ran December 18th, it’s been updated with links to both their Flipboard account and their Mastodon account.

If you’re not familiar with Flipboard, their key organizing principle is the magazine. Articles you read from any number of sources can be “flipped” into a magazine you create, along with any commentary you may want to provide. As in other social media networks, you can follow other members and be followed by them. You can comment on shared articles and other Flipboard members can respond. Another interesting feature (which I never took advantage of myself) is Invite contributors. I presume this feature allows multiple Flipboard members to contribute articles to the same magazine. This might be how The Verge handles its own presence on Flipboard.

Unrelated to the whole fediverse pivot, reviewing the features of Flipboard makes me wonder if they ever actively pursued the sorts of people who write newsletters. From what I’ve seen of Substack, it doesn’t do anything as a service that Flipboard doesn’t do as well or better–and Flipboard probably has a much larger number of monthly active users.

The key difference I’ve found so far between the mobile app experience and the web experience of Flipboard is that you can only flip articles into Mastodon via the mobile app.

Another thing Flipboard has changed since I last looked at what they were doing with Mastodon is that you can now add any Mastodon profile URL to your Flipboard profile and display a verified link on your profile page. I’ve already set that up and now my profile looks like this:

This is the sort of attention and interest that Tumblr could have generated had they moved more aggressively in exploring integration with the fediverse via ActivityPub. Tumblr is a first-class citizen on IFTTT, an awesome site for creating workflows and automations between a whole host of different services. I have a number of automations (IFTTT calls them applets) that use Tumblr as a destination and a “fedified” Tumblr would have let me automate a lot of posting without having to change a thing. Flipboard simply isn’t set up for that–not without workarounds or hacks (though IFTTT appears to have one that uses Pocket as an intermediary that I plan to try).

If this post has piqued your curiosity about Flipboard’s foray into the fediverse, I encourage you to check out Flipboard for yourself. Follow me there, comment on pieces I’ve flipped, create your own magazine(s), get the Flipboard mobile app and flip good pieces into Mastodon.

(Tech) Education Should Be Free (and Rigorous)

Free tech education is the reality being created by Quincy Larson, the founder of FreeCodeCamp. I’ve been seeing their posts on Twitter for years, but didn’t dive deeper until I heard Larson interviewed recently on Hanselminutes. The 30-minute interview was enough to convince me to add Larson’s organization to the short list of non-profits I support on a monthly basis. One of the distinctions I appreciated most in the interview was the one made between gate-keeping and rigor. Especially in the context of certifications (in an industry with an ever-growing number of them), making certifications valuable is a problem that FreeCodeCamp solves by making them challenging to earn. Having pursued a number of certifications over the course of my tech career (earning a Certified Scrum Master cert a couple of times, the AWS Certified Solution Architect Associate, and an internal certification at work for secure coding), I’ve seen some differences in how the organizations behind each certification attempt to strike that balance.

  • Certified Scrum Master. Relative to cloud certifications for AWS, Azure, or Google Cloud, CSM certification is much easier. Two days in an instructor-led training course, followed by a certification exam, and you have a certification that’s good for 2 years. I don’t recall what my employers paid for the courses to get me certified each time, but these days you can spend anywhere from $500-$1100 per person for the 2-3 day class and exam. I think the minimum score to pass is 80%, and one of my classmates the last time I certified got 100% (I missed out on that by a single question). In short, less rigorous (and far less gate-keeping).
  • Certified AWS Solution Architect Associate. I spent months preparing to take this certification exam. Just the associate-level exam itself costs $150. The self-study course and practice exams I took (both from Udemy) normally cost $210 combined, though there are plenty of other options both online and instructor-led (I expect the latter would cost significantly more per student than instructor-led training for other certifications). Achieving the minimum score to pass (usually around 70%) is far from certain, given the sheer amount of material to retain and the high level of rigor of the questions. I ended up scoring around 80% but I really had to sweat for it. Much more rigorous, but rather low on gate-keeping as well because of the relatively low cost of self-study and practice exams (and the ability to do hands-on practice with the AWS Free Tier with a personal AWS account).

The key value of rigor is that the process of preparing to take a certification exam should meaningfully apply to actually doing the work the certification is intended to represent. My experience of pursuing AWS certification is that the learning did (and does) apply to design discussions. It’s given me valuable depth of understanding necessary to push my teams to fully explore different services for building features. One of my direct reports used the knowledge gained from certification to build equivalent functionality out of AWS services approved for use inside our organization to approximate the functionality of an AWS service currently not approved for use (in order to integrate with a third-party vendor we were working with).

When I talk to people in different fields where certifications are available, I get the distinct sense that there are varying degrees of gate-keeping involved (a practice that tech companies are certainly no strangers to). My wife has said this often regarding HR certifications offered by SHRM. She’s been an HR director for over 20 years (without that certification) but hasn’t been able to pass the certification exam (despite having a master’s degree in HR management).

When considering whether or not to pursue a certification, it’s definitely a good idea to look at them from the additional perspective of whether they are gate-keeping–or providing rigor–not just if they will help you advance your career. If you can, find out from people who’ve actually earned the certification whether they feel like it helped make them better at their job. Some certifications are must-haves regardless of their rigor or utility, either because your employer requires them or because eligibility to pursue certain contracts requires them (particularly in the federal contracting space).

Everything Old is New Again: Social Bookmarking Edition

According to this TechCrunch article, a Fediverse-powered successor to del.icio.us is now available. Back in the olden days of the web, I regularly posted links there to articles that I wanted to share or read later. I moved on from del.icio.us to Instapaper, and used it a ton (and actually read more of the content I saved there) because of the send-to-Kindle feature. Enough years have passed that I don’t recall exactly when I switched from using Instapaper to Pocket, but it might have had to do with its original creator (Marco Arment) selling a majority stake to another company.

In the true spirit of the decentralized web, Postmarks is available as code in GitHub that you choose where to host (and connect to the Fediverse) yourself. Per the readme file, the creator of Postmarks put his thumb on the scale in favor of Glitch as a place to host your own instance. I played with Glitch briefly back in February when I first heard of it and found it to be a quick and powerful way to stand up new static or dynamic websites for whatever you wanted (within reason). So I visited the default site the creator of Postmarks set up, pressed the Remix on Glitch button, and started renaming things per the instructions.

I used 1Password to generate the ADMIN_KEY and SESSION_SECRET values for my remix of Postmarks. I initially changed the username from the default (bookmarks) but since the Fediverse name Glitch-hosted sites resolve to is @bookmarks@project-name.glitch.me, I thought the default (@bookmarks) worked quite well. Other changes I’ve made to the remix so far include changing the size of the read-only textbox on the About page that displays the site’s ActivityPub handle and changing the background color from pink to more of a parchment color.
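On Glitch, those secrets live in the project's .env file, which Glitch keeps private even when the project is public. A minimal sketch, with placeholder values (generate your own long random strings with a password manager):

```shell
# .env for a Postmarks remix on Glitch
# (placeholder values; substitute generated secrets)
ADMIN_KEY=long-random-string-from-1password
SESSION_SECRET=another-long-random-string
```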

Other minor changes I expect to make include:

  • Fonts
  • Unvisited and visited link colors

I’ve tried searching for the new handle with the Ivory client but it hasn’t shown up yet. There are other features I haven’t tried yet, like the Bookmarklet and Import bookmarks features that I will write about in a future post.

Will AI Change My Job or Replace It?

One of my Twitter mutuals recently shared the following tweet with me regarding AI:


I found Dare Obasanjo’s commentary especially interesting because my connection to Stack Overflow runs a bit deeper than it might for some developers. As I mentioned in a much older post, I was a beta tester for the original stackoverflow.com. Every beta tester contributed some of the original questions still on the site today. While the careers site Stack Overflow went on to create was sunsetted as a feature last year, it helped me find a role in healthcare IT where I spent a few years of my career before returning to the management ranks. Why is this relevant to AI? Because the purpose of Stack Overflow was (and is) to provide a place for software engineers to ask questions of other software developers and get answers to help them solve programming problems. Obasanjo’s takeaway from the CEO’s letter is that this decade-plus old collection of questions and answers about software development challenges will be used as input for an AI that can replace software engineers altogether. My main takeaway from the same letter is that at some point this summer (possibly later) Stack Overflow and Stack Overflow for Teams (their corporate product) will get some sort of conversational AI capability added, perhaps even without the “hallucination problems” that have made the news recently.

Part of the reason I’m more inclined to believe that [chatbot] + [10+ years of programming Q & A site data] = [better programming Q & A resource] or [better starter app scaffolder] instead of [replacement for junior engineers] is knowing just how long we’ve been trying to replace people with expertise in software development with tools that will enable people without expertise to create software. While enough engineers have copied and pasted code from Stack Overflow into their own projects that it led to an April Fool’s gag product (which later became a real product), I believe we’re probably still quite some distance away from text prompts generating working Java APIs. I’ve lost track of how many companies have come and gone who put products into the market promising to let businesses replace software developers with tools that let you draw what you want and generate working software, or drag and drop boxes and arrows you can connect together that will yield working software, or some other variation on this theme of [idea] + [magic tool] = [working software product] with no testing, validation, or software developers in between. The truth is that there’s much more mileage to be gained from tools that help software developers do their jobs better and more quickly.

ReSharper is a tool I used for many years when I was writing production C# code that went a long way toward reducing (if not eliminating) a lot of the drudgery of software development. Boilerplate code, variable renaming, and class renaming are just a few of the boring (and time-consuming) things it accelerated immensely. And that’s before you get to the numerous quick fixes it suggested to improve your code, and static code analysis to find and warn you of potential problems. I haven’t used GitHub Copilot (Microsoft’s so-called “AI pair programmer”) myself (in part because I’m management and don’t write production code anymore, in part because there are probably unpleasant legal ramifications to giving such a tool access to code owned by an employer), but it sounds very much like ReSharper on steroids.

Anthony B (on Twitter and Substack) has a far more profane, hilarious (and accurate) take on what ChatGPT, Bard, and other systems some (very) generously call conversational AI actually are:

His Substack piece goes into more detail, and as amusing as the term “spicy autocomplete” is, his comparison of how large language model systems handle uncertainty to how spam detection systems handle uncertainty provides real insight into the limitations of these systems in their current state. Another aspect of the challenge he touches on briefly in the piece is training data. In the case of Stack Overflow in particular, having asked and answered dozens of questions that will presumably be part of the training data set for their chatbot, the quality of both questions and answers varies widely. The upvotes and downvotes for each are decent quality clues but are not necessarily authoritative. A Stack Overflow chatbot could conceivably respond with an answer based on something with a lot of upvotes that might actually not be correct.

There’s an entirely different discussion to be had (and litigation in progress against an AI image generation startup, and a lawsuit against Microsoft, GitHub, and OpenAI) regarding the training of large language models on copyrighted material without paying copyright holders. How the lawsuits turn out (via judgments or settlements) should answer at least some questions about what would-be chatbot creators can use for training data (and how lucrative it might be for copyright holders to make some of their material available for this purpose). But in the meantime, I do not expect my job to be replaced by AI anytime soon.

GenXJamerican.com Moves to Amazon Lightsail, A Follow-Up

One change I missed after migrating to Lightsail was ensuring that all the posts with images in them were displaying those images on the new site the way they were on the old. A scroll backward through previous posts revealed the problem quickly enough, but life is busy, so it took a while until I had enough time to fix it. The steps I expected I would need to take to resolve the missing images issue were roughly the following:

  • Start up the old EC2 instance
  • Download the old images
  • Upload the old images to the new instance on Lightsail

Because I only stopped the previous EC2 instance instead of terminating it, I was able to re-start it. To download the old images, I’d have to find them first. Having self-hosted WordPress for a while, I knew the images would be in subfolders under wp-content/uploads, so the only real question remaining was where exactly the old Bitnami image rooted the install. Once I “sshed” into the instance, that location turned out to be ~/stack/apps/wordpress/htdocs/wp-content/uploads. Images were further organized by year and month of blog posts. To simplify the downloading of old images, I had to knock the rust off my usage of the tar command. Once I’d compressed all those years of images into a few archive files it was time to get them off the machine. I used this Medium post to figure out the right syntax for my scp commands.
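The archive-and-download leg boiled down to a couple of commands per year of uploads. A sketch, with the key file, instance IP, and year as placeholders:

```shell
# On the old EC2 instance: bundle a year of uploads into one archive
cd ~/stack/apps/wordpress/htdocs/wp-content/uploads
tar -czf ~/uploads-2021.tar.gz 2021

# From my local machine: pull the archive down
scp -i OldInstanceKey.pem bitnami@EC2_IP:uploads-2021.tar.gz .
```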

Once the archive files were on my local machine, I needed to get them onto the Lightsail instance (and expand them into its uploads folder). But just as I did when compressing and pulling the files down from the EC2 instance, I had to figure out where they were in the new Bitnami image. As it turned out, the path was slightly different in the Lightsail image: ~/stack/wordpress/wp-content/uploads. Once I uploaded the files with scp, I had to figure out how to move them into the years and months structure that would match my existing blog posts. Using the in-browser terminal, I was reminded that the tar command wouldn’t let me expand the files into an existing folder structure, so I created an uploads-old folder and expanded them there. Then I had to figure out how to recursively copy the files there into uploads. It took a few tries but the command that ultimately got me the result I wanted was this:

sudo cp -R ./uploads-old/<year>/* ./<year>
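Put together, the upload-and-merge leg looked roughly like this. A sketch: the key file, IP, and year are placeholders, and the paths match what I found on the Lightsail Bitnami image.

```shell
# From my local machine: push an archive up to the Lightsail instance
scp -i LightsailDefaultKey.pem uploads-2021.tar.gz bitnami@LIGHTSAIL_IP:~

# On the Lightsail instance: expand into a scratch folder first, since
# tar wouldn't expand into the existing uploads tree the way I wanted
cd ~/stack/wordpress/wp-content
mkdir -p uploads-old
tar -xzf ~/uploads-2021.tar.gz -C uploads-old

# Then merge each year into the matching folder under uploads
cd uploads
sudo cp -R ../uploads-old/2021/* ./2021
```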

Now, every post with images has them back again.

GenXJamerican.com Moves to Amazon Lightsail

Before last year ended, I moved this blog off its EC2 instance running a too-old version of PHP to an Amazon Lightsail instance in a new region. The original rationale for hosting on EC2 was to have a project and a reason to do things in AWS other than whatever a certification course might teach. But having finally earned that AWS Certified Solution Architect Associate certification last spring (and paid more in hosting fees than a blog as small as this really merits), the switch to a simpler user experience and lower cost for hosting was overdue.

Lightsail made it simple to launch a single self-contained instance running the latest version of WordPress. The real work was getting that new instance to look like the old one. Getting my posts moved over wasn’t hard, since I make a regular habit of using Tools > Export > All Content from the dashboard to ensure I have a WordPress-compatible copy of my posts available. The theme I use, however (Tropicana), recommends far more plugins than I remember from when I first chose it. The Site Health widget nags you about using a persistent object cache, so I tried getting the W3 Total Cache plugin working. I kept seeing an error about FTP permissions that I couldn’t resolve, so I got rid of the plugin, and Site Health said the server response time was ok without it. Another plugin I got rid of was AMP. Something about how I had AMP configured seemed to prevent the header image from loading properly. With AMP gone, everything worked as before. Akismet Anti-Spam and JetPack are probably the most important plugins of any WordPress install so I made sure to get those configured and running as soon as possible.

The last change I needed to make was the SSL certificate. The Lightsail blueprint for WordPress (the official image from Bitnami and Automattic) has a script which automatically generates certs using Let’s Encrypt. When the script didn’t work the first time (because I’d neglected to update my domain’s A record first), I went back and made that change then shut down the (now) old EC2 instance.
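For reference, the cert script in question is Bitnami's bncert tool. Once the domain's A record pointed at the instance, the run was a single guided command (path per current Bitnami images; it may differ on older ones):

```shell
# Guided HTTPS setup (Let's Encrypt) on the Bitnami WordPress image;
# prompts for domains, redirects, and auto-renewal
sudo /opt/bitnami/bncert-tool
```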

GenXJamerican 2.0 still needs some more changes. I used to have a separate blog just for photos, years ago when one of my best friends was hosting WordPress instances. The Social Slider Feed plugin lets you pull in content from Instagram and other social media sites, so I’ve added those to a Photos page. Once I figure out the photo gallery plugin, that should be the next update. I’ll also be looking into the ActivityPub and WebFinger plugins as part of my growing interest in Mastodon.

Owning My Words, Revisited

A few years ago, I wrote this brief post, after Scott Hanselman re-tweeted one of his blog posts from 2012. In the wake of last year’s takeover of Twitter by Elon Musk, I’ve been pointing people to Hanselman’s decade+ old advice because I’m seeing it repeated in various forms by others (Monique Judge of The Verge is the most recent example I’ve read). In the time since that November 2019 post, I’ve published at least 60 posts (with a couple dozen more still in drafts). But the best-written and fiercest piece I’ve read on the subject is this Substack post by Catherynne M. Valente.

Her piece is well worth reading in full and sharing with friends. I’m just 5 years older than Valente, and reading it gave me a flashback to the very first page I ever put on the web. It was probably back in 1994, since the Mosaic browser had just come out the year before. I was a sophomore computer science major at University of Maryland then, so it would have been wherever they let students host their own pages. It was just some fan page for the team they used to call the Washington Redskins.  I somehow figured out how to take an image of the team’s helmet and make it look debossed under everything else I put on the page.  It was the first time I got compliments from strangers for something I did on the internet (in a Usenet newsgroup for fans of the team).  Usenet is how I joined my first fantasy football league.  Many of the guys I met online in that league back in 1993 are still friends of mine today. I later met a number of them in-person when I visited the Pacific Northwest for the first time (and I’ve been back a couple more times since).  Usenet is how dozens of us Redskins fans ultimately met in-person and attended a Redskins game together in San Diego (LaDainian Tomlinson’s rookie debut in 2001, and Jeff George’s debut as Redskins starting QB).  So much life has happened since then that until I read a post like Valente’s, it’s very easy to forget all the different ways in which much less sophisticated tech than we have today proved to be very, very good at helping us make meaningful, durable connections with each other.

The 12-point plan of how online communities are created and ultimately destroyed is the heart of her piece. A lot of the friends I first made on Usenet, or even email distros, have migrated through a lot of the same sites Valente listed as having fallen victim to that plan. The migrations to Mastodon (or Instagram, or Slack, or Discord, or Reddit, or SMS groupchats, etc) sparked by Twitter turning into $8chan (as some only half-jokingly call it now) is a reminder of many previous site & app migrations. Personally, I’m splitting the difference–spending a bit more time on Slack with friends, an ongoing chat with my cousins via GroupMe, and more time on Mastodon in favor of a bit less time on Twitter (less doomscrolling at least). Particularly in the depths of the pandemic (which sadly still seems far from over), some of my Twitter mutuals found and formed a real community in a direct message group. There are nearly 20 of us, all black, in business, tech, academia, science, and journalism among other fields. They’ve been some of the most encouraging people regarding my writing beyond my own family. One of them gave me the opportunity to be a panelist on a discussion of diversity in tech. I continue to learn from them through our ongoing conversations and value our connections enough to have shared other contact info with them if Twitter does go down.

Some of us have already learned that the grass isn’t always greener elsewhere when it comes to social media. What’s being done to Twitter by Elon Musk right now–as much value as I still personally gain from using it–has been an opportunity to reconsider how I engage with social media. I’ve been much more selective about who I follow on Mastodon (just 85 people vs over 800 on Twitter) and am seeing a lot more technical content as a result. This change in my social media experience is intriguing enough that by this time next year I may be one of those people who went from having just a basic grasp of how Mastodon worked to self-hosting an instance and writing all about the experience.