- It’s now on the backend too
- There’s a ton of new/cool learning resources
At first I tried learning from a traditional book, from Codecademy, and by reading online. But I was only spending 30 minutes a day on it, and nothing stuck. It’s like my guitar teacher told me when I was 14: “When you start, you need to practice at least an hour a day to train your brain to think this way, or it’s not going to work.”
It feels great. My brain is so happy to learn something new and useful. After day 2 I was able to bang out a simple prototype of an algorithm that was too painful to do in Excel. I wish I had done this sooner.
I went looking to hire a programmer or a small firm to help me. I even wrote a simple spec to make the task official. But it turns out my task is not complicated enough to engage someone else. And yet it’s beyond my HTML/CSS skills.
When I was young I taught myself BASIC on my Sinclair 1000, optimizing code to fit in 1K of memory. I kept those skills fresh in high school and college and even did a bit of Pascal work in grad school.
But that was a long time ago. Since then I’ve been busy learning about networks, then design, then business, then being a husband and father. Just about every year I would wonder if I should re-learn programming, if only to prototype my interaction design ideas. This often took the form of, “Should this be the year I learn Flash?” The good news is I didn’t spend a lot of time developing deep Flash expertise.
Why is 2014 any different? A few reasons:
- There’s increasing discussion about the pros and cons of unicorns
- When trying to fill an interaction design position recently, I interviewed two actual unicorns. They really exist!
- I love to learn, and I feel a little tapped out of new things to learn in UX. Not that I know it all, I certainly don’t, but there’s nothing so new it stimulates my brain like it used to.
- Programming is such an exciting field these days. Stuff like GitHub, Node.js, Rails, and NoSQL databases make it possible to make new things in new ways.
And all the developers I talk to want to differentiate based on services. I hear from a lot of people who want to brainstorm, to partner on ideas, to think about strategy and process, and to measure ROI. For once I’d love to hear someone say, “All we want to do is write solid code at a reasonable price.”
…yes, that term sounds a little dumb, but it’s an idea I think will be important in the future. A deliberate spin on computer literacy, the idea is that play will be important not only to designers, to support creativity and innovation, but simply to get along in an electronic world.
In the past I’ve argued that software tools will continue to be created much faster than we can possibly design quality interfaces for them. So consumers who are able and willing to play with a device to figure it out may be more successful.
I had this thought today while driving and navigating with the Google Maps iPhone app (I know, I know). The iPhone is slick, and the app is slick, but for so few functions it ain’t easy to use. But the slickness, the playfulness, of it all helps me overcome this. The desirability of the device and the experience make me want to overcome the usability problems. As designers, we can build playfulness in to help people, and, cynically, playfulness might be a sexier product development approach to sell than usability.
Update: Only tangentially related is how we receive serious political ideas from comedians…
In one “astounding half-hour” of television, Stewart viewers saw “more trenchant talk of the financial crisis and the responsibility of the networks than you’d find on any news channel, all the more surprising in that it aired on Comedy Central.”
Not surprising, really, in that comedians like Lenny Bruce did this long ago. It’s just another place where we like to coat our serious work with a bit of humor and fun to make it palatable.
It’s not often we get to peek inside anyone’s concept design process, so this blog from IDEO has me starting up my reverse-engineering machine….
An open project between BugLabs and IDEO, this deep-dive exploration of the BUGbase UI is focused on re-envisioning the BUGbase interface with an eye toward integrating new display and input technologies.
The outcome of these explorations will feel less like a finished product and more like a concept car. And like any successful concept car, we hope these provocations will not only help us gauge users’ interests, but will spur constructive discourse and inform future design, engineering, and business decisions.
BugLabs’ commitment to openness presents a unique and exciting opportunity for us to be as inclusive about the design process as possible. For this quick two week collaboration, we will be conceptualizing new interface paradigms, designing new tangible user interface directions, and creating the associated industrial design/housing-modification solutions.
Bill Scott of Netflix, formerly of Yahoo, will be hosting the Future Practice webinar tomorrow, helping web designers learn how to create designs that are easier to implement by illustrating the UI engineer’s point of view. And you, my dear readers, get 20% off if you enter the code VTWBNR when signing up.
I recorded a bit of our rehearsal with Bill. He’s killer smart — an O’Reilly author and frequent presenter — but has a great laid back style that’s such a pleasure to learn from. Here’s a ~7 minute edit:
I’ll get the plug part of this post out of the way right here: I worked on a preview of Bill Scott’s upcoming webinar, What Every Designer Should Know about Interface Engineering. I think it’s both very good and very important, and what’s more, it’s a topic I’ve never seen addressed outside of project work.
I was reminded of this in a meeting the other day. We’re working through a Flex application that asynchronously queries the server with each criterion specified on a form, a little like what Kayak does, but with more data coming back from the server. Unfortunately, the latency in that data transfer is simply too high, with too much customer time spent watching the spinning cursor. It’s a shame, because the design of the form itself (not mine) is clever, and I found myself thinking, “This is how Google would do it.” But the infrastructure in question is not as robust as Google’s, and so the design needs to be modified to fit the latency of the system.
And this is the kind of situation that Bill refers to. When designers understand the limits and degree of difficulty in the technology needed to implement their designs, the designs are better.
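One common mitigation for the situation above is to coalesce the per-criterion queries so the server is hit only after the user pauses, rather than on every form change. A minimal sketch in JavaScript, with hypothetical fetch/render calls standing in for the real ones:

```javascript
// Minimal debounce: returns a wrapper that delays fn until waitMs
// have passed without another call, collapsing bursts of form
// changes into a single server query.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// Hypothetical usage: only the last criteria change within 400ms
// actually triggers a round trip.
const queryServer = debounce(function (criteria) {
  // fetchResults(criteria).then(renderResults); // hypothetical calls
  console.log("querying with", criteria);
}, 400);
```

This doesn’t lower the latency itself, but it keeps a slow round trip from firing on every keystroke, which is one way a design can be adapted to the infrastructure it runs on.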
In the evolution of programming languages, we’ve been moving to higher and higher levels of abstraction, for example from binary to assembly to C to scripting. Writing code gets easier, but the more generalized functions come with less flexibility, which limits how much abstraction is practical.
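The trade-off is easiest to see when the same task is written at two levels. A small sketch in JavaScript:

```javascript
// Lower level: an explicit loop, with the programmer doing the
// index bookkeeping and keeping full control of the iteration.
function sumLoop(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}

// Higher level: one generalized function (reduce) hides the
// bookkeeping, at the cost of control over how iteration proceeds.
function sumReduce(numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}

console.log(sumLoop([1, 2, 3]));   // 6
console.log(sumReduce([1, 2, 3])); // 6
```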
Bill Scott’s Protoscript is a small but significant step in this evolution:
Protoscript is a simplified scripting language for creating Ajax style prototypes for the Web… I am a huge proponent of breaking down the barriers for the non-techies among us to be able to do what us techie geeks can do.
There are many AJAX frameworks out there, but Protoscript is designed to address a different and widespread need — those of us non-programmers who would like to make rich websites — without over-generalizing the code too much. It still involves looking at code, which I think will scare off many people, but it seems he’s thinking about how a graphical interface can control this.
For designers, it means you will soon be able to do more without relying on a developer, and developers can focus more on the backend systems. For everyone else, it means the web will be getting more interesting more quickly.
While we associate the older programming languages — and business models — with the old economy, John Soat writing in InformationWeek reminds us everything old is new again…
John Backus died. Backus, 82, was the originator of the Fortran computer programming language. Generally considered the first high-level language, Fortran was a lot easier to use than the machine code computer programmers had to wrestle with before Fortran came along. “Much of my work has come from being lazy,” Backus was quoted as saying. The logic of that statement is breathtaking, and makes him one of my personal heroes.
Fortran is still being used today, 53 years after its inception. And why did Fortran become such a widespread standard? Because it was free, as in free beer. IBM gave away the Fortran compiler with every IBM 704 mainframe computer, which was how software was distributed in those days. Free.
Headline: Killing Page View is Suicide

Publishing experts have proclaimed that the death of the ‘page view’ is near. This consensus is fueled by the adoption of a new Web development technique called AJAX, but AJAX holds hidden dangers for publishers.
Or, we could just use mod_rewrite to create unique URLs for each AJAX page. Problem solved. (Or, we could focus on possibly more useful metrics, like unique visitors, revenue generated, ads viewed…)
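As a sketch of what that might look like on an Apache server (the paths and parameter names here are hypothetical, and assume each AJAX view can already be addressed by a query string):

```apache
# Hypothetical .htaccess rules: give each AJAX-loaded view its own
# countable URL, mapped internally to the page plus a query parameter.
RewriteEngine On

# /article/123/comments  ->  /article.php?id=123&view=comments
RewriteRule ^article/([0-9]+)/([a-z]+)$ /article.php?id=$1&view=$2 [L,QSA]
```

Each view then has a distinct URL that analytics tools can count like any other page.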
If Google has taught us anything, it’s that throwing a bunch of PhDs and engineers at hard technical problems can yield great results. But the technical problems aren’t the hard ones; it’s the human problems that are hard.
So big companies are adopting social media. As they take a shortcut by buying software from BigCo-friendly vendors like Pluck, one has to wonder how long it’ll take before customers experience social media fatigue.
And, in the rush to install the software, I wonder if they even think about being in competition with Blogger, Flickr, and the other quite-good industry leaders. In the past I’ve seen many efforts to consider vertical search, only to realize that everyone doing search is competing with Google. I wonder if the same realization will come with social media, or if social media will simply become the new paradigm for most media.
In the past I’ve observed that as processor speed increases, software replaces dedicated hardware. For example, in music or video production, programs like GarageBand and Final Cut Pro on a stock Macintosh can replace dedicated rack systems and DSP chips.
Now, with Web 2.0-ish advances on the Internet, we can go further and say that as bandwidth increases, remote applications replace locally installed applications.
Yamaha has developed a beautiful prototype of a device that “allows everyone to play music intuitively.” But the simplicity of the user interface raises the question of why it isn’t implemented in software (i.e., why can’t I get my hands on this now?). I know the obvious answers, and I appreciate a great hardware UI and portability, but I believe we’ll only gain more utility from network-based software applications as people adopt them. It makes even more sense when you see someone make something that looks similar and is a lot of fun, like Ollie Rankin’s Ten or Eleven (imagine this on a tablet PC).
For some reason I’m fascinated by programming language design. One reason is that innovation can happen at the tools level, and the tools that fuel software are undeniably important.
In the hands of a great author, writing on this topic weaves together the technical, the social and the personal forces at work. One of my favorites is Worse is Better, an analogy that applies beyond programming languages. Another is Paul Graham and his work on Arc.
In The Periodic Table, Primo Levi tells a story that happened when he was working in a varnish factory. He was a chemist, and he was fascinated by the fact that the varnish recipe included a raw onion. What could it be for? No one knew; it was just part of the recipe. So he investigated, and eventually discovered that they had started throwing the onion in years ago to test the temperature of the varnish: if it was hot enough, the onion would fry.
We’re going to try not to include any onions in Arc.
An exciting part of Graham’s work is that he starts by admitting Unix/C has won. Then he proceeds to set his goal even higher. This is the spirit of design, of always turning whatever situation you have now into something even better.
Coding from Scratch: A Conversation with Virtual Reality Pioneer Jaron Lanier
…if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time. Certainly there wouldn’t be any evolution or life. There’s something about the way complexity builds up in nature so that if you have a small change, it results in sufficiently small results; it’s possible to have incremental evolution…But in software, there’s a chaotic relationship between the source code…and the observed effects of programs…
What advice do you have for developers just starting out?
There’s a lot I would say. If you’re interested in user interfaces, there’s a wonderful opportunity these days to push what a user interface can be. If a user interface gives a user some degree of power, try to figure out if you can give the user more power, while still keeping it inspiring and easy to use. Can you do it? For instance, could you design a search engine that would encourage people to do more complex searches than they can do on a service like Google today, but still do them easily? I haven’t seen a really good visual interface, for instance, for setting up searches on Google. Could you do that? Could you suddenly make masses of people do much more specific and effective searches than they currently are doing just by making a better user interface?
Chimera 0.4 is a browser for Mac OS X built on top of Mozilla…
The cross-platform UI will be replaced with native Cocoa widgetry (such as customizable toolbars and a drawer for the sidebar). The plan is to produce only a browser (no other apps!), and to keep the UI as simple and as clean as possible.
I’ve always thought open source development doesn’t lend itself to quality UI design. But in this case they can draw from 1) years of web browser conventions, and 2) the Mac OS X human interface guidelines. I’m looking forward to the results.