Blog

#development

#web-industry

Clouds, and VPS’s before that, work on the age-old principle of buying in bulk and selling by the piece. You run one big server for $1,000/month, then you rent it out to seven people for $200/month, and voila, you’ve cleared a $400/month profit.

DHH, Don’t be fooled by serverless

This is something that has bothered me for some time. The prices of these services, when compared to full servers, are absurd. Hear that whole sentence. Compared to full servers. The response of many would be either a) "the prices are very low!" or b) "no one needs a full server!" The whole sentence matters. If you need a full server, the price is very high to rent it piecemeal. If you are going to eat a whole cow in a year, buying it pound by pound from the grocery store will cost substantially more than buying a whole cow.

But what happens if a customer needs the performance of a whole box, most of the time? Then they’re paying $1,400/month for $1,000’s worth of computing. Or maybe, because they’re reserving the whole box, they’ll get a deal at $1,250/month by committing to a whole year. That deal is far less obviously good on both sides. It’s basically a credit agreement at a 25% APR. Tread wisely!

But if you execute enough functions to fill the computing power of a whole box, it’s a terrible deal.
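The rental math above is worth making concrete. A minimal sketch, using only the hypothetical prices from the quote (the variable names are my own illustration):

```python
# Hypothetical prices from the quote above.
server_cost = 1_000   # $/month to run one big server yourself
slice_price = 200     # $/month per tenant slice
slices = 7            # slices sold per server

# The provider's side: buy in bulk, sell by the piece.
revenue = slice_price * slices           # $1,400/month
provider_profit = revenue - server_cost  # $400/month

# The customer's side: needing a whole box, but paying piece prices.
whole_box_piecemeal = slice_price * slices  # $1,400/month for $1,000 of compute

# The "deal": reserve the whole box for a year at $1,250/month.
reserved_price = 1_250
premium = (reserved_price - server_cost) / server_cost  # 0.25 -- the 25% markup
```

The point falls out of the last line: even the discounted reserved price carries a 25% premium over just owning the box.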

And then there is the lock-in. If you build an application in PHP or Ruby, you can basically run that anywhere. These cloud services are designed so you have to architect around them. If the pricing of my server for this site goes too high, I can take it elsewhere. It’s just HTML.

The further down the rabbit hole you go with “cloud-native” services in serverless, the harder it’ll be to climb out when you realize that you should own the donkey rather than rent it. And especially once you realize that paying to rent a whole donkey at the piece price of a hundred slices is an even worse deal than just renting the whole donkey by itself! […] And if you start off with a proprietary serverless setup, you might well find the lock-in impossible to escape by the time the rental math no longer works.

So who are these services best suited for?

The cloud is primarily for companies that have big swings in use – like Amazon’s original AWS case of huge demand around Black Friday and Christmas, which left them with unused capacity for the rest of the year – or for early outfits that don’t do enough business to either warrant owning a whole computer or spend so little on the cloud that it just doesn’t matter.

If you’re a conservative in tech, it is next to impossible to live in an echo chamber.

Over the last week, my Twitter timeline exploded with rage over the two big GOP topics of the week: net neutrality and the big tax bill. If I were to believe my timeline, these two items were doomsday-grade events. There was no way that a human could stand by this massive tax bill or against net neutrality. Not a single dissenting voice.

Sometimes this is a real sign that something is truly important and should be stood for or against. Sometimes this is a sign of a lack of diversity in thought or ideology in your news feed.

Seeing all these voices — voices that I greatly respect — freaking out over the tax bill, I started to wonder if the GOP had truly jumped the shark. So I went to a few major news sources and a few minor ones. I sought answers and tried to understand the facts from my worldview. And I found myself agreeing with the majority of the bill's provisions, while also understanding how others would see them as bad.

Echo chambers exist when we surround ourselves with a homogeneous group of voices. As a conservative, it is next to impossible to live in an echo chamber. If you want to follow anyone in tech that matters, you’re going to hear opposing views on political matters.

What are you doing to stay away from echo chambers?

I don’t want to be the neophobe in the room but I sometimes wonder if we’re living in a collective delusion that the current toolchain is great when it’s really just morbidly complex. More JavaScript to fix JavaScript concerns the hell out of me.

Dave Rupert

I’ve been feeling this hard lately. When we are talking about something like WordPress, we care about a few metrics, page load speed and size being the primary ones. Memory usage and performance on the server matter, but often, sadly, much less.

However, when we are talking about the client, the browser, much more should matter.

I am responsible for the code that goes into the machine, and I do not want to shirk responsibility for what comes out. Blind faith in tools to fix our problems is a risky choice. Maybe “risky” is the wrong word, but it certainly seems that we move the cost of our compromises to the client and we, speaking from personal experience, rarely inspect the results.

Yeah, we also rarely analyze the browser memory usage or repaint counts of our pages. I had my laptop fan spin up this morning as I quickly opened half a dozen tabs from ComicBook.com, and they all auto-loaded dozens of trackers and started playing video. Each tab. Safari instantly ran up gigs of memory and my CPU hated me.

But this is modern web development. Who gives a shit anymore?

I tried to find a couple of quotes from this article, but I think I need this entire section to sum up where I am as a web developer.

Many web developers have “moved on” from a progressive-enhancement-focused practice that designs web content and web experiences in such a way as to ensure that they are available to all people, regardless of personal ability or the browser or device they use.

Indeed, with more and more new developers entering the profession each day, it’s safe to say that many have never even heard of progressive enhancement and accessible, standards-based design.

For many developers—newcomer and seasoned pro alike—web development is about chasing the edge. The exciting stuff is mainly being done on frameworks that not only use, but in many cases actually require JavaScript.

The trouble with this top-down approach is threefold:

Firstly, many new developers will build powerful portfolios by mastering tools whose functioning and implications they may not fully understand. Their work may be inaccessible to people and devices, and they may not know it—or know how to go under the hood and fix it. (It may also be slow and bloated, and they may not know how to fix that either.) The impressive portfolios of these builders of inaccessible sites will get them hired and promoted to positions of power, where they train other developers to use frameworks to build impressive but inaccessible sites.

Only developers who understand and value accessibility, and can write their own code, will bother learning the equally exciting, equally edgy, equally new standards (like CSS Grid Layout) that enable us to design lean, accessible, forward-compatible, future-friendly web experiences. Fewer and fewer will do so.

Secondly, since companies rely on their senior developers to tell them what kinds of digital experiences to create, as the web-standards-based approach fades from memory, and frameworks eat the universe, more and more organizations will be advised by framework- and JavaScript-oriented developers.

Thirdly, and as a result of the first and second points, more and more web experiences every day are being created that are simply not accessible to people with disabilities (or with the “wrong” phone or browser or device), and this will increase as standards-focused professionals retire or are phased out of the work force, superseded by frameworkistas.

Zeldman

I’ve personally been building websites since 2001. The web standards movement was just beginning. In 2004, I was part of a state web development competition through my high school, and they required our sites be built in XHTML and CSS. I felt the pushback against ugly, inaccessible plugins, like Flash, and bad JavaScript practices from the start. Fast forward seven years, and I led the charge for responsive web design at a Chicago-based agency, declaring that we shouldn’t upcharge our clients for something that is absolutely necessary. That was before we reached 50/50 desktop-to-mobile traffic.

Progressively enhanced, responsive, and accessible websites are in my blood. And that’s why it pains me so much to be rehashing conversations from the start of the standards movement as to why we shouldn’t require JavaScript, or assume that our users’ devices support fill-in-the-blank, or even assume our users can see like we do. And I’m having to rehash these conversations regularly with the Angular and React JavaScript frameworkistas.

We fought this fight for a reason and it matters today more than ever. Tomorrow is Blue Beanie Day. I’m old enough to remember why. I will be wearing one to stand for accessibility and progressive enhancement. I hope you do too.

Development shops are relying on the communications team at a finance agency to know that they should request their code be optimized for performance or accessibility. I’m going to go out on a limb here and say that shouldn’t be the client’s job. We’re the experts; we understand web strategy and best practices—and it’s time we act like it. It’s time for us to stop talking about each of these principles in a blue-sky way and start implementing them as our core practices. Every time. By default.

A List Apart

I’ve been in the web industry for 15 years, grew into my own during the web standards revolution, and have a huge heart for a11y issues. Seeing our industry revert to, in many ways, the methods and practices from before the standards movement is disheartening at best. We need, now and always, to insist on core development principles.

At some point, the difference vanishes. Most people never did “real work”, by whatever metric, on their computer; they were happy to browse web pages, send emails, Skype friends, whatever. Yet the redoubt of “real work” is defended valiantly, perhaps by those whose jobs depend not on the work, but on the tools used for it – the PC. It’s very notable how often those defending the “real work” divide are also systems administrators of some sort. It’s as if, like the London cabbie, they felt their employment was in peril, while everyone else adapts around them.

For myself, I ask “What do I need to be able to do my job?” LAMP environment? I set up a Digital Ocean droplet that I can SSH/SFTP into. Sass and Grunt? All set up on the droplet. FTP client and code editor? Coda for iPad is fantastic. But I’m a front-end developer, so the browser is a key tool in my toolbox. I need a web inspector to see what styles are applied to an element. I need a way to test responsive websites across many sizes. I need a JavaScript console to look for errors and help with debugging. There are a few apps for viewing the source of a page, but that doesn’t quite scratch my itch. There are a few apps with a simple console, but none of them really work well with the iPad’s big screen. They all seem built for iPhone and embiggened for iPad.

So what is a front-end web developer to do? Before Thanksgiving I started doing a lot of research and over Thanksgiving weekend (which was nice and extended for me) I started to build something special.

I call it Web Tools. Keep it simple, right? To start (1.0), Web Tools has a scalable web view that allows you to test any width you want and a web inspector to allow you to easily drill down through the DOM tree and see what styles are applied to each element. And this is just the start. More great features are coming to Web Tools in the coming months, including a powerful JavaScript console.

Building websites on the iPad, even an iPad mini like mine, is a joy when you have the right tools. So I am working to bring desktop-level tools to the iPad to remove excuses. As Twitter says, it’s the #yearofticci.

Web Tools launched today and can be had for a $5.99 introductory price. Head over to the App Store and buy a copy!

Looks like Adobe has been paying attention. Artboards, simplified tools and editing, and much more. I’m interested to see where this goes, though apps like Macaw seem much more friendly to the modern, responsive web.

Of course, the web development model also has its own set of challenges. In particular, there is a huge over-indulgence in trackers today, and this can wildly impact responsiveness. If you run a plugin like Ghostery for a while, you’ll quickly learn just HOW prevalent add-ins like this are. In a quick tour around common news sites, for example, I found the AVERAGE number of external tracking libraries being loaded to be more than twenty.

In Progress

Yup. When I worked at Abt Electronics, I was appalled by the requests to add a new “tracking pixel” every few weeks. We had dozens of them on our site. A simple look under the hood showed quite clearly that performance was hurt severely by these tracking scripts. Bringing this to the attention of an apathetic employer made me realize how bad the problem is. Marketing wants to track visitors and will not listen when developers explain that these scripts hurt the very visitors they want to track. This is even more of an issue on mobile networks, which, even with 4G LTE, are significantly slower and lossier than broadband.
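That quick look under the hood is easy to automate. Here is a minimal sketch (the class and function names are my own; it only counts static `<script src>` tags, so pixels and dynamically injected trackers slip past it) that tallies the distinct third-party hosts a page loads scripts from:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class ScriptHostCounter(HTMLParser):
    """Collects the distinct hosts that <script src> tags load from,
    excluding the page's own first-party host."""

    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return  # inline script, no external request
        host = urlparse(src).netloc
        if host and host != self.first_party:
            self.hosts.add(host)


def count_third_party_scripts(html: str, first_party: str) -> int:
    parser = ScriptHostCounter(first_party)
    parser.feed(html)
    return len(parser.hosts)
```

Pointed at a saved copy of a typical news-site homepage, this gives a quick lower bound on the tracker count the quote above describes.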

If we simplified our webpages, stopped trying to emulate native, and stopped bloating them up with unneeded network requests, we’d have a much faster web experience than native (when considering finding and downloading of a native app).

Instead of a clear set of rules moving forward, with a broad set of agreement behind them, we once again face the uncertainty of litigation, and the very real potential of having to start over – again – in the future. Partisan decisions taken on 3-2 votes can be undone on similarly partisan 3-2 votes only two years hence. And FCC decisions made without clear authorization by Congress (and who can honestly argue Congress intended this?) can be undone quickly by Congress or the courts. This may suit partisans who lust for issues of political division, but it isn’t healthy for the Internet ecosystem, for the economy, or for our political system. And, followed to its logical conclusion, this will do long-term damage to the FCC as well.

AT&T

A very well-reasoned response to the FCC’s potentially historic vote yesterday from AT&T. This was no unanimous vote.

When you build a website with traditional standard DOM techniques, you get accessibility “for free” more or less, and this is without question a good thing. I’ve been a proponent of accessibility for as long as I can remember. It does not follow, however, that what Flipboard chose to do is wrong.

It is true that Flipboard’s engineering decisions prioritize animation and scrolling performance above accessibility. That’s no secret — the title of their how-we-build-this post was “60 FPS on the Mobile Web”. It does not mean they don’t care about accessibility. My understanding is that accessibility is coming — they’re working on it, but it isn’t ready yet.

When titans clash. I love that we are, as a web community, getting back to writing instead of quick messages on Twitter. The thought that both Faruk Ateş and John Gruber have put down in words is great, and the conversation is what makes the web such a great platform.

Last week, I shared a blog post from Flipboard’s engineering team about their new, mind-blowing website. Not only did they create a great desktop version of their service, but they built a stunning, Canvas-driven mobile version. Their whole reason for not using the DOM (HTML and CSS) was for the sake of visual performance and user experience. 60 frames per second. In iOS, we call this butter. It isn’t something iOS developers sacrifice. It’s the expected norm. In fact, it is by and large why HTML5-driven, app-wrapped apps are so bad. With the web stack, smooth animation doesn’t happen. When a scroll view doesn’t scroll like butter, it looks jumpy. Jumpy looks cheap. Facebook suffered from this, so they went native. Basecamp suffered from this, so they are slowly going native. My bank suffers from this, but they continue using PhoneGap and it is the worst app that I am forced to use.

As Gruber points out:

Blinded by ideology, oblivious to the practical concerns of 60-FPS-or-bust-minded developers and designers, the W3C has allowed standard DOM development to fall into seemingly permanent second-class status.

The DOM was never built for what we are doing with the web today. But Faruk is right about the DOM:

[The DOM makes] content easily accessible to anyone, anywhere, anytime, using any device

This is what is difficult. If you want to build a great product, some sacrifices must be made. Gruber says it perfectly: “shipping is a feature.” Ship early, ship frequently is oft the motto of web companies, and one I agree with. Flipboard chose to use new technology that fits their product, and by doing so increased the amount of effort needed to pull it off. If Gruber is right that Flipboard is working hard on accessibility for their new web service, then I don’t see much of a problem here. Before last week, Flipboard was only available as an app. Last week that changed. Those who need accessibility will have to wait a bit longer.

But like Gruber and Faruk, I care highly for accessibility. This is why I took the opportunity with Ashes a couple years ago to involve a blind man in testing the app. With his help, most of the app is fully accessible through VoiceOver. Subsequently, Ashes was featured on a number of podcasts because of its accessibility. Making it accessible was an afterthought. Design and, as Faruk says, flashiness were crucial to making the app what it was. The design sold it. But accessibility came soon too.

I think in a year we’ll look back at the amazing, open-source library that Flipboard will have released for making Canvas accessible and forget that for a few months we had a beautiful, cool app that was inaccessible. But in the meantime, the conversation will continue.