Recent ebook roundup

Picked up from Kobo:

  • The Unleashing, by Shelly Laurenston. Urban fantasy/paranormal romance. Book 1 of her Call of Crows series. Nabbed this because it went on sale, and because I keep hearing this series gushed about on Smart Bitches as an example of a series with excellent camaraderie between female characters. (I really wish the cover wasn’t a shirtless dude in a hoodie if there’s that much emphasis on female relationships, but hey, romance marketers don’t listen to me!) Also there’s a heaping helping of Norse-based worldbuilding going on in this series, and I’m here for that.
  • An Illusion of Thieves, by Cate Glass. Fantasy. Book 1 of her Chimera series. This has gotten a lot of buzz about being essentially a heist story, but in a fantasy setting. It sounds fun, so when it went on sale I snapped it up.
  • Untamed Shore, by Silvia Moreno-Garcia. This is Moreno-Garcia’s first thriller, and I thought the plot sounded intriguing. Plus, I’ve read a little bit by this author before and I want to read more of her work.
  • The Name of the Rose, by Umberto Eco. Nabbed this by spending some Super Points on my Kobo account, and because we’re reading this in book club.
  • Stormsong, by C.L. Polk. Book 2 of her Kingston Cycle series. Nabbed this because I really enjoyed Witchmark, and I’m looking forward to this second book in the series, starring the sister of the hero from the first one. And an F/F romance too!
  • The Unspoken Name, by A.K. Larkwood. Fantasy, book 1 of The Serpent Gates. Grabbed this one on the strength of this review at Tor.com, and because LESBIAN. ORC. ASSASSIN. Yes please I’ll have some!
  • The Dragonbone Chair, The Stone of Farewell, and To Green Angel Tower, by Tad Williams. Books 1-3 of the Memory, Sorrow, and Thorn trilogy. Fantasy, a series I’ve read before and which I own in print. Nabbing these in ebook because my print copies of these are gigantic hardbacks and I’d rather like to read these again.

Picked up from Comixology:

  • Harleen, by Stjepan Šejić. Graphic novel. This is a retelling of Harley Quinn’s origin story, which I nabbed in digital form after seeing it mentioned in the comments on the Tor.com review of Birds of Prey. Since I enjoyed the movie quite a bit, I was very much in the mood to check out this graphic novel. And I burned through it as soon as I bought it, because the art is gorgeous and the story is thoroughly engrossing.

And, pre-ordered from Kobo:

  • The Shadow of Kyoshi, by F.C. Yee. Book 2 of the Kyoshi duology from the world of Avatar: The Last Airbender. Book 1 rocked and I am VERY on board for book 2. :D
  • Mexican Gothic, also by Silvia Moreno-Garcia. Saw this mentioned when I went looking for the author’s Twitter account and went ZOMG at the description of it as a re-invention of the Gothic horror/suspense novel. This one’s set in 1950’s Mexico, and the author’s page for it includes an endorsement that compares it to Mary Stewart. I need it in my brain RIGHT NOW.

21 for the year.

Put up a new post on angelahighland.info

I’d posted last month on my angelahighland.info site about how our webserver, the very server where annathepiper.org is hosted, kept crashing.

Good news! I’m pleased to report that for the time being, anyway, we’ve found a solution. I also put that post on angelahighland.info instead of here, just because that’s where the first post was. But if you’re interested, you can find the details right over here.

Hard drive upgrade for my laptop

Testing, testing, testing. I upgraded the hard drive in my computer yesterday, putting in a brand new SSD, and wow booting this thing up is smokin’ fast now.

The overall process I followed was:

  1. Take the computer apart so I could take out the old drive
  2. Put new drive in and put computer back together
  3. Install Catalina as a brand new install
  4. Use Migration Assistant to pull my data off my last Time Machine backup

The first big hiccup I ran into with this was that it took me three tries to get a viable USB installer for Catalina. Fortunately we have other Macs in the house, so all props to Paul for letting me use his downstairs system to generate the third USB installer, which was successful.

The second big hiccup was getting the new install of Catalina to actually see my Time Machine backup. Normally I run Time Machine over our house LAN, and the older laptop that acts as our Time Machine server saves my backups out to a USB drive attached to that. I attached that USB drive directly to my laptop. But Migration Assistant didn’t realize what backup I wanted to use until I specifically went into Finder and mounted the sparsebundle. Once I did that, Migration Assistant went “oh you mean THIS backup” and proceeded to let me actually pull data out of it.

That migration process went smoothly, though the wild vacillation of time estimates was kinda hysterical. It dropped from about “7 hours 57 minutes” (with Dara and me expecting this would have to run overnight) down to about “2 hours 20 minutes”, and then plummeted from there to somewhere around 20 minutes, where it hovered for way longer than 20 minutes, vacillating wildly between 20, 38, 17, 11, 18, and other numbers in that range. The speed at which the drive was operating kept fluctuating too, and we didn’t know why. Dara’s theory was that maybe Migration Assistant had to go up and down through various levels of Time Machine backup to get a good read on all the things it had to pull out. But we don’t know this for sure.

Third big hiccup so far was that the system was confused about letting me log in with my Apple ID. I boot the thing up and it goes “hey your Apple ID needs to log in to allow various things to work”. I’m all “cool” and I try to log in with it… only to get an error message that said, and I quote, “An unknown error has occurred.”

This was, shall we say, less than helpful.

So I had to go googling as to what the hell to do to fix that. Tried several unsuccessful things until I landed on this article on AppleToolBox, which provided some terminal-level commands that ultimately did the trick.

Fourth hiccup: Mail initially refused to let me import messages, swearing up and down that I didn’t have enough space in my home directory. It lied. It was also confused as to WTF the actual problem was, which turned out to be a permissions issue: I had created a brand new user on the system when I pulled in my Time Machine data, and Mail didn’t think its directory was properly owned by that user. So I had to fix that. What ultimately worked for me were the steps provided in this Apple forums thread.

Mail also had issues letting me back into some of my accounts, but I think this may have been part and parcel of the Apple ID problem? Once I fixed that and reinstated my various mail accounts, Mail seemed happy accessing them.

Fifth hiccup: the program I use to manage my reminders and tasks, Things, also had a permissions issue. I wound up locating where it stores its database with the help of this article, and fixed the permissions on that, similar to what I did for Mail. (In this case, that meant getting into the terminal, finding the thing, and throwing chown at it.)

As of this writing, these are the things so far that have made the process bumpier than I would have liked. But major functionality on the system now seems to be in place. I am really pleased with how fast the thing boots up now. And hopefully once Catalina finishes going “OH HEY NEW DRIVE LET ME INDEX ALL THE THINGS”, I should see an overall general performance boost. Which should extend the life of this machine a little while longer, until it finally stops getting security updates and I have to upgrade to a new system.

Confirmed working so far:

  1. Apple ID login
  2. Syncing to my phone and iPad
  3. Dropbox
  4. Mail
  5. Logging into various things I usually log into in my browser (social media, mostly, but other frequently visited sites as well)
  6. Things
  7. Password manager
  8. RSS readers (I have two)

Still to check:

  1. Making sure all my documents and photos and other files came over safely off the Time Machine backup (means checking the Desktop, Documents, Music, and Downloads directories just to make sure everything looks in order)
  2. Scrivener
  3. Google Drive
  4. Calibre
  5. LINE (which I use to talk to my guildmates in Dungeon Boss)

Once all the major things have been checked, I’ll feel comfortable with reinstating Time Machine backups. But I wanted to get all this documented while it was fresh in my brain!

(And oh yeah, I can also report that doing a fresh install of Catalina does not appear to have fixed the weirdness in my playlists on my phone and iPad. Boooooooo. Apparently I’ll have to wait for Apple to fix that properly. Oh well!)

A few ancient sketches

I’ve been very slowly going through an assortment of random old things from one of the upstairs bookshelves, trying to decide what I wanted to keep and what I didn’t. One of the things in this assortment was a “sketch diary” type notebook, dating from the era of the original Murkworks, during that short span of time when I was playing around with colored pencils and trying to figure out how to draw.

I only had sketches on the first three pages of the notebook, which just goes to show you how far I got with that attempt at drawing. I haven’t touched the thing since (and we’re talking well over 15, maybe over 20 years here), so I decided to recycle the sketchbook.

But not before scanning in the sketches. I wanted to keep them for posterity, particularly since they’re mostly attempts at drawing Two Moons characters. And I wanted to share them here!

(Editing to add: Dreamwidth readers, you’ll want to click over to annathepiper.org to actually see the pictures. The plugin I’m using for rendering galleries doesn’t play well with being crossposted to Dreamwidth, sorry about that!)


Now commencing the 2020 ebook roundups

I’ve been doing website juggling what with having to transfer my main author site operations from angelahighland.com to angelahighland.info. Which means my more non-writing related posts are going up on annathepiper.org instead!

Like my book purchase roundups. Here’s the first for 2020.

Acquired from Kobo:

  • Destiny’s Embrace, Destiny’s Surrender, and Destiny’s Captive, all by Beverly Jenkins. These are all historical romances, and specifically featuring protagonists of color in Civil-War-era (and I think post-Civil-War?) America. Jenkins has been on the Smart Bitches podcast a couple of times, and she seems delightful, so I finally bought a few of her books when I saw them on sale for $1.99 each.
  • Truthwitch, by Susan Dennard. YA fantasy. Grabbed this because I had liked the cover when I first saw this one come out a couple of years ago, and because it went on sale for $2.99. (And I was slightly chagrined to see that shortly after that, Tor.com offered this as their free book for the month for January.)
  • Lord of the Last Heartbeat, by May Peterson. Fantasy romance. Grabbed this because a) hey, it’s another Carina author writing fantasy romance, and b) one of the protagonists is non-binary. Awesome. \o/

Acquired from Amazon:

Grabbed all three of these because they’re titles that were pulled out of the RITAs due to the big scandal with RWA over the tail end of December and the beginning of this month. There was a nice roundup page on Amazon with links off to the titles to buy and support the authors, and these were all ones that looked interesting.

  • The Magnolia Sword: A Ballad of Mulan, by Sherry Thomas. I’ve read some Thomas (her Lady Sherlock series), and I’d like to see her take on Mulan.
  • The Orchid Throne, by Jeffe Kennedy. Fantasy romance. I know of Kennedy via Carina as well! And I’ve been meaning to read her work for a while now.
  • Polaris Rising, by Jessie Mihalik. SF romance. Grabbed this one, I’ll say straight out, because of the similarity of title to Jupiter Ascending. If this book hits the same sort of “big silly fun” sweet spot that movie did for me, I’ll enjoy it immensely.

Acquired from Gutenberg.org:

  • A Vindication of the Rights of Woman / With Strictures on Political and Moral Subjects, by Mary Wollstonecraft. Pulled this down from Gutenberg because we’re going to read this for book club.

Acquired so far for the year: 9

Page Object Model testing

In the ongoing process of doing my code work on my Github, I came across an idea that I’d actually encountered before, but for which I’d never previously had an identifying term. I was excited to finally learn its name.

Namely, Page Object Model testing. (Not to be confused with Project Object Model, which is what the POM in a pom.xml file stands for when you’re working with Maven.)

What this is: a way of writing a test framework that separates “code that represents the thing you’re testing” from “code that actually does the testing”. Turns out I’d learned about how to do this at Big Fish, when I picked up the idea and learned how to implement it in Python.

Back in those days, it helped to do this because it let me have test cases that basically said “okay go get me the page I need to test, and stick all the data representing it into this object, which I will then do tests against”. It meant that in setup for tests, all I had to do was go grab an instance of the object that represented the page to be tested. And that meant in turn that the tests themselves were more tightly focused.

I found that it required a bit more organizational work to set up, but that once I had the idea in place, it meant writing future tests became easier. For example, if I had test script A that tested the homepage of the site, and then later needed to write test script B against the same page, I wouldn’t have to rewrite the code that loaded the homepage for testing. Likewise, if something about the structure of the homepage changed, I would only have to change the class that dictated that structure, with possibly only minor changes to any test scripts that needed to deal with it.
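To make that concrete, here’s a minimal sketch of the pattern in Python. The class name, URL, and page title are all made up for illustration, and a stub stands in for a real Selenium driver so the structure is easy to see:

```python
# A stub standing in for a real selenium.webdriver instance,
# just so this sketch is self-contained.
class StubDriver:
    def __init__(self):
        self.current_url = None
        self.title = "My Test WordPress Site"  # hypothetical site title

    def get(self, url):
        self.current_url = url


class HomePage:
    """Page object: knows how to load the homepage and what's on it.
    Tests talk to this class instead of to raw locators."""
    URL = "https://example.com/"  # hypothetical test site URL

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get(self.URL)
        return self

    @property
    def title(self):
        return self.driver.title


# A test's setup then shrinks to "go grab the page object":
page = HomePage(StubDriver()).load()
assert page.driver.current_url == "https://example.com/"
assert "WordPress" in page.title
```

If the homepage’s structure changes, only HomePage has to change; the tests that use it mostly stay put.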

I liked this way of organizing code well enough that I have been implementing it on my Github repos. In the Python Selenium demo, specifically.

But, now that I’m done with the initial wave of Selenium tests and am looking at ways to expand the suite in both Java and Python, I started thinking of how to rearrange the organizational structure of both suites and seeing whether I could do a similar structure in Java. That led me to discovering that this whole concept had a name.

And it also led me to being able to implement Page Object Model testing in the Java Selenium repo, too.

What this means in practical terms is that I can think of pages on my test WordPress site in terms of “here is an object that represents an entire page, including child objects”. These child objects would be things that are shared across all pages, like a sidebar, or a footer, or a menu. They’re things I have to implement only once in a Page Object Model system, and can then reuse as necessary in tests that exercise them on different pages.
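Sketched in Python (with made-up class names, and no real driver wired in), that composition looks roughly like this:

```python
# Shared child objects, implemented once...
class Sidebar:
    def __init__(self, driver):
        self.driver = driver


class Footer:
    def __init__(self, driver):
        self.driver = driver


# ...and composed into a base class that every page object inherits,
# so each page automatically carries the shared pieces.
class BasePage:
    def __init__(self, driver):
        self.driver = driver
        self.sidebar = Sidebar(driver)
        self.footer = Footer(driver)


class PostPage(BasePage):
    pass  # post-specific elements would live here


post_page = PostPage(driver=None)  # a real suite would pass a WebDriver
assert isinstance(post_page.sidebar, Sidebar)
assert isinstance(post_page.footer, Footer)
```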

So all in all this has been a satisfying area of research.

Some links I’ve used to read up about this:

Page Object Model (POM) | Design Pattern on Medium.com

Getting Started with Page Object Pattern for Your Selenium Tests on Pluralsight.com

A few quick definitions

Regarding the last post I put up (here if you’re reading this on annathepiper.org, and here if you’re reading it on Dreamwidth), I thought I’d do another quick post with some definitions of terms I’m throwing around, for those of you who aren’t in the tech industry and might not know what I’m talking about:

API: An API is basically a known set of ways that a program, an operating system, or, in this case, a website lets other code hook into it and use it. For example, Apple has an API for developers to use if they want to create apps for iOS. What I’m doing is playing with the REST API that WordPress makes available as part of the WordPress code.

Service: A service is a thing that sits underneath a website and does a lot of under-the-hood things for it. I described this on Facebook as being part of a website’s engine. It’s not something a user would see just interacting with a site in their browser, but it’s an important thing nonetheless, and it’s there to help the website do its job.

REST API: REST is specifically one type of format an API might take for access to a web service. This Wikipedia page has more if you want to read up on it. But for purposes of this post, I’ll simply say that the REST API endpoints I’ve worked with to date, both in the context of a job and on my personal coding projects, are endpoints that return JSON payloads that I can test against.

JSON: JSON is a text-based data format with a specific syntax and structure. It gets used a lot in web services as it’s reasonably easy to parse, as well as to hand back and forth through the various steps of a web site’s operation. So when I see a JSON payload come back off a web service endpoint, I can use my test automation to analyze that payload and look for interesting things in it. For example, an HTTP response code, like a 404. Or an error message or error code. OR, if the test is specifically looking for a successful response, I might be looking for an expected title or body of content. Since I know I’m dealing with JSON, I can set up my code to drill down into the payload and look for these specific things.
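As a concrete illustration of that drilling-down, here’s a tiny Python example against a hand-written sample payload, shaped like (but not captured from) what a WordPress posts endpoint returns:

```python
import json

# A hand-made sample payload; the field names mirror the shape of a
# WordPress REST API post, but the values are invented.
payload = json.loads("""
{
  "id": 42,
  "status": "publish",
  "title": {"rendered": "Hello world!"}
}
""")

# Drill down into the payload and look for the interesting things:
assert payload["id"] == 42
assert payload["status"] == "publish"
assert "Hello" in payload["title"]["rendered"]
```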

Endpoint: An endpoint is basically one of several routes you can use to do things with a service. It’s more or less a URL that you can use to make the service report back on various things.

So with all of those identified, does it make more sense now if I say that my demo tests are hitting the documented WordPress REST API endpoints, looking at the JSON I can get back from those endpoints, and analyzing them for various things? Let me know! And let me know if you have any questions. :)

Negative test cases

On a previous job interview loop, one of the people I spoke with gave me good feedback about the work I’ve been putting up on my Github repos. He observed at the time that I had been hitting the low-hanging fruit: i.e., the test cases dealing with good, expected data.

He was right. So I’ve gone back and updated my repos for the WordPress REST API tests to also include negative test cases, i.e., known bad data, to test the error behavior of the endpoints. So here’s what I focused on to do that.

First, several of the endpoints ask for IDs of this, that, and the other thing: post IDs, category IDs, etc. For those sorts of cases, I did these negative tests:

  • IDs that could be valid (i.e., were legit integers), but which did not actually exist as posts.
  • IDs that were specifically not valid, i.e., things that were not integers. E.g., “aaaaaaa”.
  • Using MAXINT as a post ID, just to use a GIGANTIC integer. Practically speaking I’d usually expect this to also be a “this post does not exist” scenario. But I’ve tested things in the past where a value slightly above a limit behaved fine, while a value WAY bigger than the limit did not. So I wanted to do this scenario too.
  • Also using MININT as a post ID. This is not only to test using a gigantic thing, but ALSO to test using a gigantic thing that could not actually be a valid post ID (i.e., because it’s a negative number).
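In Python, the bad-ID scenarios above boil down to a list a test can loop over. The route here is the real WordPress posts route, but the specific ID values are made up, and sys.maxsize stands in for MAXINT:

```python
import sys

bad_post_ids = [
    999999,            # plausible integer, but no such post (made-up value)
    "aaaaaaa",         # not an integer at all
    sys.maxsize,       # MAXINT stand-in: a GIGANTIC integer
    -sys.maxsize - 1,  # MININT stand-in: gigantic AND negative, so never valid
]

# A test would GET each of these and assert on the error payload returned.
urls = [f"/wp-json/wp/v2/posts/{bad_id}" for bad_id in bad_post_ids]
assert urls[0] == "/wp-json/wp/v2/posts/999999"
assert len(urls) == 4
```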

Secondly, several of the remaining endpoints didn’t use IDs, but rather slugs/tags. So for those endpoints, I did negative test cases that were very similar.

  • Slugs/tags that could be valid, i.e., strings, but which did not actually exist in the database (such as using “pancakes” for a category tag).
  • Slugs/tags that could not be valid. E.g., using “pancakes” with additional non-alphabetic characters.
  • Using MAXINT and MININT again.

In all of these cases, I threw the bad data at the endpoints and looked for specific error codes and error messages I was expecting to get in response. I also looked for 404 response codes.

I worked out the expected behavior by testing the various endpoints manually with bad data in Postman, and seeing how they responded. That gave me the basic JSON response structure I would need to expect: that there would be both an error code and an error message, plus an additional data object containing the response code. So my test cases looked for all three of these things.
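For reference, the error structure I mean looks like this. The sample below is hand-written to match that shape (an error code, an error message, and a data object carrying the response code), not captured from a live site:

```python
import json

# Hand-written sample in the shape of a WordPress REST API error payload.
error = json.loads("""
{
  "code": "rest_post_invalid_id",
  "message": "Invalid post ID.",
  "data": {"status": 404}
}
""")

# The three things the negative test cases look for:
assert error["code"] == "rest_post_invalid_id"
assert error["message"]
assert error["data"]["status"] == 404
```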

As of this writing, I’ve implemented the negative test cases in Java, Python, and C#. Once I had them working in Java, it was reasonably easy to port them over to the other languages. In all three languages, this has now brought the test case count for the REST API suite from 20 up to 60.

In addition to learning the error behavior of the various endpoints, the most useful thing I’ve picked up on in this part of the project is how to get at MAXINT and MININT in the various languages.

In Java, it’s Integer.MAX_VALUE and Integer.MIN_VALUE. In Python (Python 3, specifically), it’s sys.maxsize and -sys.maxsize, though a true minimum analog would be -sys.maxsize - 1. Apparently, in Python 2, it was sys.maxint, but that went away: Python 3 doesn’t have a hard and fast integer size limit at all, because its ints are arbitrary-precision, so sys.maxsize (the largest size a container can have) is the closest remaining stand-in. And in C#, it’s int.MaxValue and int.MinValue.
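A quick Python 3 demonstration of the above:

```python
import sys

# sys.maxsize is the biggest "native" integer (2**63 - 1 on 64-bit builds,
# 2**31 - 1 on 32-bit ones), which makes a fine MAXINT stand-in.
assert sys.maxsize in (2**31 - 1, 2**63 - 1)

# But Python 3 ints themselves have no hard size limit; going past
# sys.maxsize just gives you a bigger int, with no overflow:
assert sys.maxsize + 1 > sys.maxsize

# For a MININT stand-in, negate it (the exact minimum analog, matching
# Java's Integer.MIN_VALUE convention, would be -sys.maxsize - 1):
assert -sys.maxsize < 0
```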

Meanwhile, over in the Selenium test suites, there are fewer opportunities for negative cases. But I did add some that throw bad data at the search box, to verify that I get the appropriate messaging if there are no matching results. There’s some room for expansion there as well, to try to test the limits of what the search box can accept. So that’ll be the next place I add more negative cases.

Porting my projects to C#

Back in Ye Olden Times when I went to college, object-oriented programming was barely starting to become a thing. It didn’t help matters that the particular school I went to was behind the times on its computer science program, either. So I didn’t discover until I was actually in the workforce that of all the languages I’d learned about in college, the only one that was at all useful was C.

C# hadn’t been invented yet when I graduated. When I look it up, I see that Microsoft put out the first release of C# around 2000 (source: Wikipedia’s C# page), and by then, I was working at Attachmate. Attachmate was also one of the last places I had any opportunity to work with C++, aside from one short contract I had after they laid me off.

Up till then, my only work at Microsoft (my full-time stint there in the early 90’s, and my contract work in the mid-90’s), was pre-C#. My next work at Microsoft, the two back-to-back contracts I had from 2003 to 2005, was straight up testing: a mix of manual testing and running automation written by the team’s actual employees. I didn’t have any opportunity to work with any language directly myself, much less C#.

After that, all of the full time gigs I’ve had were ones functioning outside the Microsoft-based ecosystem. Big Fish in particular was a Python and Java environment. Which has been great for my accrual of Java and Python experience!

But now that I’m back on the job market, I’m seeing regular signs of jobs asking for C# coming across my radar. So far, I’ve had to tell recruiters that I don’t have any professional experience working with the language, or for that matter with modern versions of Visual Studio. (The last time I would have seen any version of Visual Studio would have been that aforementioned short contract. According to my old resumes, round about then, I was mentioning working with Visual Studio 6.0 and Visual Studio .NET.)

I figured, though, that as long as I have time on my hands it would behoove me to try to actually get some hands-on experience with the current release of Visual Studio and with C#. And, since I’ve been told by former Big Fish colleagues that C# is very, very similar to Java overall, I figured I’d try to port my current Java work up on Github over into C#.

As of this writing, I have done this! The code I’ve ported so far is the little test suite that runs the REST API tests against my test WordPress site. This ported code now lives on my Github in its own repo.

Things I have learned from this experience

I do not like Visual Studio as much as I do IntelliJ for an IDE. But since part of the entire point was to practice getting hands-on experience with a current Visual Studio version, I put up with that. And it helped that I figured out a few ways to make Visual Studio less cluttered, by docking the things I need access to off on the side to be hidden until I need them. That frees up most of the screen on my dev laptop for me to see my code.

C# is, indeed, a LOT like Java. It’s simultaneously enough like Java and enough different from Java that the little differences will probably trip me up a lot if I wind up regularly working with this language. Nothing I couldn’t eventually get used to. But it’ll be a thing I’ll need to look out for.

I don’t like that Visual Studio makes me have to re-add a file in git every time I change it. (IntelliJ does not make me do this, even on Windows.) When I went googling about this, I found that this is apparently by design.

I do like the concept of a “namespace” that’s apparently a thing in C#. If I understand it correctly, the idea is to group a big set of associated classes under one name, so code in the same namespace can see all of them. This seems to simplify matters a bit, and has saved me having to do a few extra import statements.

Oh wait, I’m sorry, not “import”. “Using”. See previous commentary re: differences between C# and Java.

Something else I’m still having to get used to, and this is a question of “quirks of the IDE” as opposed to “quirks of the language”: if I want to run tests in Visual Studio, I have to hit the Test Explorer for that. I can’t build the project directly, otherwise it’ll complain at me. This seems to be because I’ve built my various test classes as class libraries, as per various tutorials I’ve seen on how to set up tests. Which means in turn that Visual Studio won’t let me run them directly. Apparently you only get to run things in Visual Studio if you’re specifically building executables?

Lastly, a quibble of terminology: I don’t like that Visual Studio calls its projects “solutions”. It’s a bit too rah-rah YAY GO SOLVE YOUR PROBLEM for me. And I’m all “yes yes I’m going to STOP TRYING TOO HARD TO HELP ME”, here.

Which, I feel, exemplifies my experience with Microsoft products overall rather well.

Specific libraries I’ve discovered

A lot of this porting work has been “learn how to deal with the language” as well as “learn how to deal with the IDE”. But there’s also been some measure of “find C# equivalents of the technology I’m used to dealing with in Java”. One of these has been TestNG. The recommended C# version of this is apparently NUnit, which seems fine so far. Learning how to deal with this has been just a question of learning its specific syntax for annotating test methods.

Likewise, the current recommended way of doing REST API testing in C# seems to be RestSharp. From what I saw in my googling, apparently there’s now an in-language way of doing this, but I stuck with RestSharp as it seemed a good analog of the Unirest library I learned how to use on the Java side.

Lastly, since I needed to find an equivalent for how Java parses JSON, I went with LINQ to JSON in the JSON.NET/Newtonsoft library. This has been a pretty close match to the json.org libraries you can use in Java, and gives me the same level of ability to parse JSON payloads without having to worry about deserializing them.

(Which took me a bit of work to discover. RestSharp’s docs seemed to really really REALLY want me to deserialize JSON I get back on REST API calls. This is not helpful if all I really want to do is look into the JSON and go “yes, I see the correct value(s) in there”, as opposed to actually saving the stuff out into variables and doing additional things with it.)

What’s next

I also want to port the Selenium-based tests into C#. But since the Java version of these tests specifically uses Selenide as a framework to do its testing, I want to find a C# library that does the same thing. Atata seems promising and I shall investigate it.

Tutorials I’ve found so far for how to do Selenium-based testing in C# talk a lot about how to install Selenium and browser drivers into your Visual Studio. I don’t actually need to do that, given that I already have a Docker grid running. And since I already know how Selenium works, all I really need to know is how to get my C# code pointed at a Selenium grid.

Job descriptions that come across my radar also keep periodically mentioning Cucumber. This is a thing I discovered during my last research project at Big Fish, so I’ve been a bit interested in checking it out. I don’t know yet how well it plays with C#. This may also require investigation.

All in all

This has been a valuable experience so far. I can now say with assurance that if called upon to do so, I can in fact deal with both C# and Visual Studio.

Under no circumstances can I be called an expert, mind you. And I would still not be a good job match for any position that requires multiple years of experience working in the C# realm. But for any position with a bit more flexibility, where they’d be willing to go “oh sure, you have experience with Java and Python and have at least SEEN C#? We can work with this”? I could do that.

Also: given that I’m an amateur musician, I do have to say, I do like that the language is called C#. I play that note a lot on my fiddle and on my winds. I’ll take any little connection to music in this endeavor that I can get. :D

Coding projects update

As of this writing, I now have a total of five repositories on my Github account: the misc-configs repo for various config/supplementary files, and two each for Java and Python work. For each of those languages, I have a repo for the REST API portion of this project, and one for the Selenium side.

All of the repos can be seen on my Github account.

What I’ve been calling the rough “phase 1” of this project is now more or less complete. I’ve got basic test cases in place in both languages for both the REST API side, and the Selenium side. As I’ve written about before, the API tests are dealing with the service endpoints that handle publicly viewable information. The Selenium tests are mostly oriented around testing parts of the homepage of my little test WordPress site.

Now I’m moving into the rough “phase 2”. In this phase, I’m adding more Selenium tests. This’ll include adding some sidebar tests for the homepage, as well as tests for additional sections of the site (a post and a page), and making sure that the elements are correct on the selected links. I’ll also be testing site search and adding a new comment to a previously existing post, since that’s something I can do without authentication.

“Phase 3” of this project will get into dealing with stuff that requires authentication. From the REST API side, this’ll mean dealing with the service endpoints that handle things at the site admin level (such as making a new post or comment, or editing a previously existing one). From the Selenium side, I’ll want to see about verifying logging in and logging out of the site, and making sure that the links displayed in the “META” area of the sidebar update themselves accordingly.

(NOTE: I am NOT going to try to test the actual WordPress admin UI. That’s a whole different kettle of fish than testing a front-facing site.)

In related news, I’ve also discovered Github’s “Projects” functionality, and I’ve made myself a project there to cover the work I’m doing. This amuses me, as their Projects board looks a lot like JIRA, the bug tracking/project management software we used at my Former Day Job, as well as at the short contract I had after the layoff at the tail end of last year.

Interested parties can find my current active project on my Github projects page. I’ll be adding additional projects to that once this one is complete–like the WordPress plugin work I want to do!

I’ve actually had job recruiters and interviewers ask me about this work, now that I’ve got a link to my Github on my resume. This has proven beneficial in interviews I had last week, and I even got useful tips on additional libraries I can research, as well as aspects of version 8 of Java I hadn’t had experience with yet. I’ve gotten positive feedback about how I do comments on things, as well as on the various Readmes I’ve put on the repos.

So while the work hasn’t yet actually proven critical in landing me a job, it has proven useful in helping me demonstrate that I not only know how to code, but that I like it well enough to do it on my own time and to plan out larger projects.

This is, I feel, a very valuable thing for me to be able to demonstrate.