Today 4Feb2023:Saturday

Going backward through the day: I was home after midnight, as Kathleen, Richard, and I chatted for a while after the last game. I will start with the fact that I did not win either of the two Wingspan games we played. In the second game, Richard included the Europe-Asian expansion, and we enjoyed the new options (my score was even lower); I now recommend it. Richard had never opened the expansion before and was sleeving the cards before we played; he explained his choices for sleeves and how he watches for sales and orders various practical sizes. I don't sleeve the cards in my own games to protect them.

The board game Wingspan is an elegant game where you collect birds into your sanctuary, lay little eggs, build engines, and react to other players' actions. I find it a bit random, but it is fun to play.

Next, we played the basic version of the hidden-movement and searching game Mind MGMT. One player is the recruiter, and the other players are agents trying to capture the recruiter; the agent players can ask questions to try to locate them. I like this kind of game, but it is not an engine-building or traitor game, so not every gamer likes the style. After that, we played the training version; this kind of game has a lot of mechanics that take a few plays to get right. The Fury of Dracula, where one player plays Dracula and the others try to hunt the vampire before he places more vampires, is another well-loved game of this type that I own and have played quite a few times. It is now in its fourth edition, which differs from the third only in a few components (larger figures and bigger cards); I have the third. It has multiple rule books and an obscure combat system, but it is still a favorite. In comparison, Mind MGMT plays fast, and the rules are easy once you get used to the strange colors and words.

Before this, I was at The Lucky Labrador in Portland, off Hawthorne, playing games and eating (and drinking) with Evan. We played Wingspan (my copy of the game, and I won that one) and the locally created Vindication board game. I had a Czech beer and a bacon, lettuce, and tomato (BLT) sandwich (the bread is artisan, and the bacon is hot, thick-cut; wonderful). Evan was happy to crush me in Vindication.

Before this, I was at Susie’s. Susie fell asleep in minutes today. She looked comfortable and seemed happy. But, when I went to leave at 2ish, she said, “You are going to leave me,” with a sad face. Susie wanted to go with us but knew it was impossible for some reason, which made her sad. I cried in the car driving to Portland. But, soon, Susie kissed me goodbye and was quickly fine.

We tried the new Peacock show, Poker Face, but Susie slept through most of it. Evan, who also showed up, slept through it. I liked it (and did not get sleepy). It is a remake of something like The Rockford Files or Columbo: the murder is shown up front, and you watch as the detective works out how to catch the bad guy (some guest star). This version features a gal who knows when you are lying; she cannot be fooled, even on the phone or on video. While she is in her thirties and the show is set in the current day, she smokes, drinks, and swears like a sailor. I liked the mix and would recommend it.

We called Susie’s mother, Leta, and Susie and Leta had a friendly chat, but Susie nodded off a few times. Leta was happy that it was above zero (-18C) and was enjoying a lazy Saturday. We made it a short call as Susie had trouble paying attention and staying awake.

I have not mentioned driving; today, the traffic was heavy but always moving. The lane changes and merges were hair-raising a few times. The mix of polite and crazy is not fun at speeds over fifty! On the way to Richard's from The Lucky Labrador, I took the back streets and found myself dodging a bus! I had started crossing four lanes, and the bus pulled out when I was halfway; there was no place to go but to hit the pedal and get around the bus. I am not proud of that one. No paint was lost on Air Volvo today.

I started the morning working on my ship models. I put on the flags and names for the ships, which fit some past gaming use and the current times: Unexpected and Unprecedented. I built the sails for the Unprecedented but then decided the space jammer would look better and be more useful as a gaming model without the masts. I attached the deck equipment (including a lovely brass wheel) and wrapped the edges in mahogany wood strips. I have also kit-bashed an HMS Victory wooden model kit into a cutaway of the bow (front) of the famous ship. I bought the ruined and partially finished kit years ago on eBay, thinking I could try this; a new kit is $370. It is the nearly impossible Italian kit where you must cut out everything and do it perfectly for the model to come together. I paid much less than that, and it has been sitting in the garage for more than five years. Time to use it. No regrets.

I wrote the Friday blog on Friday at Wildwood Taphouse. After finishing the blog, I had another beer and started on my Kaggle contest. Yes, beer helps my Python programming. I reread all the directions with care. I read other folks' coding examples (nobody was coding like me; interesting) in the articles (this contest has a cash reward for articles; the idea is that more articles mean more contestants and better results, with some knowledge sharing). I must admit that the double-integration and power-series math problems go back to the 1980s for me. However, I did see some ideas I had been wondering about: least-squares (some custom implementations to match the problem), centroids, and other line-matching processes.

Kaggle contests offer prize money for solving machine learning (ML), artificial intelligence (AI), or other math-centric real-world problems. For example, I am working on ice data from a neutrino detector, and the idea is to trace each detection back to the point in the sky that sourced the particle. Very fuzzy, small sets of data, where simple (or complex) conventional math solutions fail, are the world of approximating ML and AI detection. From my undergraduate 1980s AI work (I don't get to say that often), I know that least-squares and similar approaches are overly sensitive to errors in the data, so it was nice to return to a problem I worked on once before.
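A quick sketch of what I mean about sensitivity (toy numbers, not the contest data): with ordinary least-squares, a single corrupted point drags the fitted slope well off the truth.

```python
import numpy as np

# Clean points on the line y = 2x, plus one gross outlier.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_clean = 2.0 * x
y_bad = y_clean.copy()
y_bad[-1] = 30.0  # a single corrupted detection

def fit_slope(x, y):
    # Least-squares slope through the origin: argmin_m sum (y - m*x)^2
    return float(x @ y / (x @ x))

print(fit_slope(x, y_clean))  # 2.0 exactly
print(fit_slope(x, y_bad))    # pulled well above 2 by the one outlier
```

One bad reading out of five nearly triples the error in the estimate, which is why the contest articles lean on custom variants rather than the textbook formula.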

My first goal with Kaggle code is to produce a result set that, while hard-coded to the wrong results, at least passes the initial test and submits correctly. Unfortunately, my code failed, and, frustrated, I had not returned to it for a week. As I had no working code, I also missed the chance to write an article. Very frustrating. The beer helped (it was my second one), and I noticed that nobody else was reopening the results file and appending to it. Instead, they wrote their result set directly from a dataframe. I had tried to make each batch independent, but that was not what the working examples were doing. I changed my code to accumulate the pseudo-results and write them out as a single CSV with a header. Pass, and somehow I even scored higher (!) than other folks (266th best team). I am back in the game.
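The fix can be sketched like this; the column names here (event_id, azimuth, zenith) are my placeholders for illustration, not necessarily the contest's exact schema. The point is accumulating all batches in memory and writing one CSV with a header from a pandas dataframe, rather than reopening and appending to the file per batch.

```python
import pandas as pd

# Hypothetical column names for illustration; the real contest's
# submission format may differ.
rows = []
for batch_id in range(3):           # pretend each loop pass is one batch
    for event_id in range(2):
        rows.append({
            "event_id": batch_id * 100 + event_id,
            "azimuth": 0.0,         # hard-coded placeholder predictions
            "zenith": 0.0,
        })

# One dataframe, one write, header included: no reopen-and-append dance.
submission = pd.DataFrame(rows)
submission.to_csv("submission.csv", index=False)
print(len(submission))  # 6 rows
```

Keeping the batches in a plain list and converting once at the end is also faster than appending to a dataframe (or a file) inside the loop.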

I hope to find some time one of these evenings to work on slightly improved results using least-squares, then adjusting with a Monte Carlo approach (random improvements). It is often a good idea to combine methods like this to create a baseline, and to remember that any cool ML or AI solution you write needs to beat a stupid, sledgehammer-like algorithm.
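A minimal sketch of that plan, with toy 1-D data standing in for the real event data: take a least-squares baseline, then apply Monte Carlo refinement that keeps only random tweaks which reduce a more outlier-robust error measure.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: y ~ 3*x plus noise,
# with one gross outlier to make the problem interesting.
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x + rng.normal(0.0, 0.1, size=x.size)
y[-1] = 50.0  # corrupted detection

# 1) Least-squares baseline (slope through the origin); the
#    outlier drags it well above the true slope of 3.
baseline = float(x @ y / (x @ x))
m = baseline

def robust_error(m):
    # Median absolute residual: far less outlier-sensitive than
    # the squared error that least-squares minimizes.
    return float(np.median(np.abs(y - m * x)))

# 2) Monte Carlo refinement: random tweaks, keep improvements only.
random.seed(0)
for _ in range(2000):
    candidate = m + random.gauss(0.0, 0.1)
    if robust_error(candidate) < robust_error(m):
        m = candidate

print(round(m, 1))  # much closer to 3 than the skewed baseline
```

Accept-if-better random search like this is the sledgehammer: crude, but it sets the bar any fancier ML solution has to clear.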

I managed to sleep and get going without issues on Saturday.

I am running out of time, so I will stop here: Thank you for reading!
