It’s been a couple of weeks since WWDC wrapped up, and a few things stuck out to me as interesting.
All the iPad Stuff
I’ve written a bunch about the iPad.
I really love and get a ton of use out of my iPad, to
the point that it’s my travel machine. The limitation
has always been that you can’t get “real” work done
without jumping through hoops. Real(er) multitasking, drag
and drop, and the new iPad Pro are all steps towards
the iPad’s ultimate destiny: a daily computer for the
vast majority of people. I could probably do 90% of my
work off of an iPad at this point, but it’s a bit
painful. With iOS 11, it looks like it will be a lot
more comfortable.
iOS 11 and High Sierra (and Siri)
Setting aside the dumb name of High Sierra, both new
OSes seem like reasonable advances. A few little things in each (the new Control Center, APFS, all the machine learning libraries) are really nice evolutions from iOS 10.
The improvements to Siri are small (although the OmniFocus integrations are going to be awesome), and that’s worrying. I actually think Siri is decent (and has gotten considerably better in the past 18 months), but there’s still far too much that can’t be done with it. Apple needs to find a way to advance the ball faster.1
HomePod
Once again, setting aside the dumb name,2 I’m
cautiously optimistic about the HomePod. We have a
handful of Sonos speakers in our living room, and
we love them. I use them all the time. However,
the lack of voice control and native AirPlay3 means they take a bit more thinking to use. So,
even though we have these big, powerful speakers
sitting there, my wife uses her phone to listen to
music.
It drives me batty.
So, a HomePod, with AirPlay (well, AirPlay 2), that
my wife can tell to play whatever music she wants,
plus HomeKit integration (we’ve got a bunch of HomeKit
devices), and some Siri integration, works for us.
We have an Echo Dot, which is handy for things like
checking the weather and random facts, but we don’t
use it to do much more than that. The HomePod should
easily be able to replace that, and native integrations with calendars, reminders, etc., will probably fit into our life better than the Alexa device does.
The downside: it’s expensive. Like more expensive than
getting another Sonos speaker expensive. Won’t have more
than one in the house expensive.
The price will probably come down over time, and its
capabilities will get better (presumably), so I’m hopeful
this is the smart speaker that will fit best into our home.
Friday was my last day (by choice, for what it’s worth) at the job I spent almost
12 years at. At some point, I’ll write more about it. At the moment, I’m just going
to talk about the fact that I left my job so that I could be around when our babies
(yes, plural) are born. I loved my job and devoted a lot of time and energy to it.
Taking a break to prepare, help my wife, and be around for the babies was one of the
easiest decisions I’ve ever made.
While prepping for the babies, I’m also hoping to spend some time re-learning Ruby (and Rails) and learning Swift. So, over the next few months, expect a random smattering of thoughts around infants (turns out twins are more complicated than having a singleton, like the fact that you start using the word singleton in a non-computer-science context) and programming, particularly Swift.[^swift]
[^swift]: I’m particularly curious about the new machine learning libraries in Swift.
I’d been meaning to play around with Jekyll for a while.
When I started this site in 2004, I built it on Blogger. Sometime in 2007, I think,
I moved it to WordPress. This site has existed on WordPress in some form for the
last decade.
WordPress has been a great tool, and has evolved considerably, but it’s always been something I’ve needed to pay attention to from a security and performance perspective. I’ve also taken to writing more Markdown, which works with WordPress, but not as easily as with some of the various static site generators (like Jekyll).
It took a couple of days to get everything ported over. I started with the WordPress.com importer, used that to bootstrap the site, and then spent a bunch of time figuring out how to get everything else set up.
First up was a theme. I screwed up a couple of times here—theming is something Jekyll makes a bit more complicated than I would have expected. Unless your theme is packaged as a Ruby gem, you basically drop it over your existing setup. It took me a bit to get everything figured out, but I used the Hyde theme as the base for my setup and did a bit of tweaking to get the look right.
From there, I started to play with a few different things. By default, Jekyll doesn’t have the ability to group pages/posts in an archive, but there’s a nice plugin to take care of it. With a little configuration, I had set up an Archives page with my categories and monthly archives.
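For the curious, here’s roughly what that kind of setup looks like with the jekyll-archives plugin (whichever plugin you use, the idea is the same; the layout names and permalink patterns below are placeholders, not my actual settings):

```yaml
# _config.yml: sketch of a jekyll-archives setup. Layout names and
# permalink patterns are placeholders. (Older Jekyll versions call the
# top-level key "gems" instead of "plugins".)
plugins:
  - jekyll-archives

jekyll-archives:
  enabled:
    - categories
    - month
  layouts:
    category: archive
    month: archive
  permalinks:
    category: '/category/:name/'
    month: '/:year/:month/'
```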
I’ve been trying to get footnotes to work better than they did previously. Again, there’s a nice plugin that takes care of it and will work on all my posts moving forward.1
Once I got all this working, I started playing with how to do deploys. I’m using git, running on my server, with an adapted version of the method found here. I push to the server, and it rebuilds the site automatically.
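The gist of that method is a post-receive hook on the server. This is just a sketch with placeholder paths, not my exact hook: it clones the pushed repo to a temp directory, builds it with Jekyll, and publishes the output to the web root.

```bash
#!/usr/bin/env bash
# Sketch of a git post-receive hook: clone the pushed repo, build it with
# Jekyll, publish the output. All paths below are placeholders.
set -e

GIT_REPO="$HOME/blog.git"
TMP_CLONE="$HOME/tmp/blog"
PUBLIC_WWW="/var/www/blog"

rm -rf "$TMP_CLONE"
git clone "$GIT_REPO" "$TMP_CLONE"
jekyll build --source "$TMP_CLONE" --destination "$PUBLIC_WWW"
rm -rf "$TMP_CLONE"
```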
That whole thing works awesomely, except that my builds were taking almost a minute, which is far too long. I used the Jekyll profiler to find out the build was spending a lot of time in the sidebar. Basically, it was looping through every single post just to figure out that I have a couple of them defined as pages. Fixing that sped up my deploys by 75% (from 60 seconds down to 15).
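As a rough sketch (this isn’t my exact template, and the names are made up), the slow pattern looks like the first loop below; iterating over site.pages, a much shorter list, avoids walking every post:

```liquid
{% comment %}
  Rough sketch, not the actual sidebar. The slow version walks every post
  looking for the few entries that are really pages:
{% endcomment %}
{% for post in site.posts %}
  {% if post.layout == "page" %}
    <a href="{{ post.url }}">{{ post.title }}</a>
  {% endif %}
{% endfor %}

{% comment %} Looping over site.pages instead avoids scanning every post: {% endcomment %}
{% for node in site.pages %}
  {% if node.title %}
    <a href="{{ node.url }}">{{ node.title }}</a>
  {% endif %}
{% endfor %}
```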
I also made link posts (like this one) a little easier to discern. It uses this character—⚐—to highlight that it’s a link post and give you a way to access the permalink (the little flag).
The site is a lot faster, has no active dependencies, and could be picked up and redeployed almost anywhere with little effort.
I haven’t ported over my comments yet (that’s next). I’m not sure what I’m going to do, whether to bring them over as static content and not offer comments moving forward, or to import them into some system, but that’ll be a project for another day.
If you notice anything out of sorts, broken, or otherwise gummed up, let me know.
See how nice this is? Also, at some point I’ll try to fix up the old footnotes. But this is good for now. ↩
I thought this presentation by Dan McKinley was really interesting and resonated heavily with my experience in helping to shepherd an organization that was pendulum swinging from everybody hacking production, to nobody getting to do releases until you filled out a form in triplicate, to an org that was doing 8–10 releases on most days.
We never got to continuous delivery (CD), for a bunch of reasons, but mostly:
Cultural (it scares the crap out of the systems and support teams, even if it might be better for them)
Technical (it requires good tests and good dev/beta systems, and we’ve always underinvested in the resources there)
Organizational (we’ve rarely settled into a structure that allowed our teams to develop the discipline)
But we did continually get better, and I’m guessing that in another year or so, with the right people pushing, a real CI (continuous integration)/CD pipeline isn’t unreachable.
Some bits from the presentation that were particularly resonant with me …
Namely, we had a lot of process that was prophylactic. It was built with the intent of finding production problems before production.
As your organization gets bigger (and not even, like, really big, but just bigger), there are lots of people who think their job is to protect production by creating all sorts of process that makes it really hard to get anything shipped. In reality, all that process just makes people pay less attention, not more. There’s always somebody else who is more responsible for the code going live, being tested, being right. The further you are from being on the hook, the less attention you naturally pay.
Which is why smaller, more frequent releases, with less friction and less overhead, make a lot of sense. It’s your responsibility to make sure you don’t break production, and if you’re going to be responsible, don’t you want to make smaller bets? That leads to this tenet …
Deploying code in smaller and smaller pieces is another way. In the abstract, every single line of code you deploy has some probability of breaking the site. So if you deploy a lot of lines of code at once, you’re just going to break the site.
And you stand a better chance of inspecting code for correctness the less of it there is.
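To put made-up but illustrative numbers on that: if every line you deploy independently has a 0.1% chance of breaking something, a 10-line deploy ships cleanly about 99% of the time, while a 1,000-line deploy has roughly a 63% chance of including at least one breaking change (1 - 0.999^1000 ≈ 0.63).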
There’s a lot of goodness in this presentation, resulting from the scars of helping to drag an engineering team into something that works, that has buy-in, and that increases the velocity and performance of the team (and helps keep everybody happy because they’re working on stuff that actually gets to production). There are some bits towards the end of the presentation that make sense for one big team, but less sense for multiple teams. Having multiple teams is a huge way to help solve this problem: if you can break up your application into smaller, separate applications, or services, or microservices, or whatever the trendy term du jour is, then you can reduce the dependencies between teams.
That lets each team reduce its risk, and some teams can ship 50 times a day, some 10, and some 2. It requires a bit more coordination between teams, but with good documentation and smart API design (ideally with good versioning so that team releases don’t have to be coupled), you can get to a point where teams can all be really efficient and not beholden to the slowest team.
Anyway, it’s a long presentation, but I think it’s a really great, real-world example of how to get a big, challenging org into CD (or at least onto the path to it).
The JSON Feed format is a pragmatic syndication format, like RSS and Atom, but with one big difference: it’s JSON instead of XML.
For most developers, JSON is far easier to read and write than XML. Developers may groan at picking up an XML parser, but decoding JSON is often just a single line of code.
This is such a good, simple idea. In general, I hate dealing with XML (I actively bias against SOAP interfaces too). JSON isn’t any more verbose than XML, but it is decidedly easier to read and far less fragile. I’ve added JSON Feed to this very site.
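For a sense of what the format looks like, here’s a minimal, made-up feed with a single item (every value below is a placeholder, following the version 1 spec):

```json
{
  "version": "https://jsonfeed.org/version/1",
  "title": "Example Blog",
  "home_page_url": "https://example.com/",
  "feed_url": "https://example.com/feed.json",
  "items": [
    {
      "id": "https://example.com/2017/05/example-post",
      "url": "https://example.com/2017/05/example-post",
      "title": "An Example Post",
      "content_text": "Hello, world.",
      "date_published": "2017-05-30T12:00:00-04:00"
    }
  ]
}
```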
It was the American team’s first loss in a pre-Olympic exhibition since May 3, 1996. During that span, the U.S. team outscored opponents 1,475–24.
Angela Tincher is one of VT’s greatest athletes (and she should have been pitching for Team USA …). I remember following the team intensely that season.
Random things that have been collecting in my brain the last few weeks:
The last time I headed abroad, I realized AT&T had finally caught up to the competition and offered a reasonable international plan (use your data, $10/day).
I also realized I didn’t want to waste my data on stupid things I could wait to pull over wifi, so I made it so a bunch of apps could only update over wifi (most notably, Facebook). A couple of months later, having only used Facebook over wifi, and only sparingly (I’m ready to be done with Facebook), I noticed I was using about half as much cell data as before. Thanks, Facebook, for preloading all your shit content, and for your huge app updates.[1]
I traded in my hybrid for an electric Chevy Bolt. It’s been a pretty interesting experience (more on that in the future), but I did find one odd bug: plug in your iPhone (after using CarPlay) before the car is turned on, and it doesn’t seem to be able to boot the infotainment system. Unplug the phone, and life is back to normal (and CarPlay is usable again when you plug it back in).
I’ve enjoyed listening to Crimetown, one of Gimlet’s many podcasts, but their production schedule just destroys my ability to remember what’s going on. Same goes for StartUp. Anything that’s sort of serialized just gets trounced by the seemingly random release schedule. I think the all-at-once model (like for S-Town) is much better for stories that are serial. Or, at least, be ready to release an episode every week.[2] I probably should have just saved up all of Crimetown and binged it.
The barrage of notifications for calendar invites that you’ve seen and dealt with on other devices when you unlock your phone for the first time in a while is so horribly annoying. It’s caused me to inadvertently decline invites when I’m trying to swipe the notification away.
The calendar knows I accepted the invite. Why is it giving me this blast of prompts? I think this started in iOS 10, but I hate it.
I was particularly saddened to see Rep. Massie on the list of those voting for this measure. Having worked for him (years ago), he is certainly smart enough to understand the technical implications here, but voted out of the idea that the free market was already doing a good enough job of this (i.e. Comcast won’t sell your data without your permission, for fear that you’ll leave for a competitor).
The problem is that, in great portions of this country, there’s no free market for ISPs. In most locations, it’s a local monopoly. I’m lucky: in my city, we have two cable providers, plus high-speed fiber (Fios). In the town I grew up in? One cable provider. And then DSL, if you live in the right spot. The house I grew up in? No DSL. No options.
Anyway, use a VPN. Most sites are using HTTPS these days, which is helpful, but your ISP will still know what name you looked up, what IP came back, and how long you were on the site. If you want to be careful, switch to an open DNS provider, and use a VPN. Most DNS providers will also use your data, but they will at least give you the option to opt-out. (As backwards as this sounds, I’d recommend Google Public DNS).[1]
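If you’re on a Mac, switching resolvers is a couple of commands. This assumes your network service is named “Wi-Fi”; networksetup -listallnetworkservices will show what yours is actually called.

```bash
# Point the "Wi-Fi" service at Google Public DNS; passing "empty" instead
# reverts to the ISP-supplied servers. The service name is an assumption.
networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4
networksetup -getdnsservers Wi-Fi   # verify the change
```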
For VPN, both Cloak and TunnelBear are reasonably cheap (probably less than you pay for 1 month of internet) and easy. Or, if you’re so inclined, roll your own.
Google’s DNS privacy is pretty clear—“We don’t correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services.” ↩