I am currently in Dublin at an EIT Health Summer School (my first summer school ever!). This morning, it was Jan Gulliksen’s turn to give a presentation to us, and what he said strongly resonated with me.
The topic of the presentation was “Why are you doing this?” – the “this” being, in this case, our PhD research. We actually started off by answering this question using Mentimeter. Three different options were available: saving the world, contributing to the creation of new knowledge, and getting a PhD. As it turned out, the largest group of those present (about half) chose the creation of new knowledge as their main motivation; the remaining respondents were quite evenly distributed between “saving the world” and “getting a PhD”, though the former had the advantage (I was among those – could you tell?). Jan then moved on to talking about how we have not really managed to make the most of the possibilities offered by digitalization so far, especially within higher education. His point was that apart from the blackboard getting replaced by PowerPoint presentations, the way in which we teach students is not fundamentally different from what it was in antiquity. He made a similar observation with regard to the use of IT at work, showing side by side one picture of a workspace taken about 25 years ago and one taken recently (those computer screens certainly have slimmed down!).
This fixity in the way we teach and carry out work tasks was suggested to be, at least in part, the result of our inability to translate research findings into concrete, significant change in the “real” world and thus to foster innovation through our research. (Interestingly, the numbers suggest that this problem is particularly present in Europe, while it is much less of an issue in the US.)
If the mention of this problem resonated so much with me, it is because as an HCI practitioner, I am reminded almost daily of how little our research is being applied in practice: I encounter the same design faults over and over again in the computerized systems I use in my everyday life. For example, I recently had to contact the customer care center of an airline because they had not sent me a receipt for a seat reservation payment (how do you not automatically send a receipt?). On my way to Dublin from Uppsala last weekend, it took me four tries to buy my train ticket to the airport (why did I have to scan my card twice during the process, and why did the system shut down when I withdrew my card too quickly?). After buying a film on a video-on-demand website a few days ago, I was made to click my way through another section of the site in order to stream the film I had just purchased (why couldn’t it start playing right away?).
Of course, the issues I have given as examples above are not big issues – but I would argue that this serves my point all the more. These design flaws could very easily be avoided or fixed. No extensive design experience or outstanding programming skills are required; just some basic user-centered thinking. After all, the necessary knowledge (for example in the form of design heuristics) and techniques (such as quick-and-dirty usability testing methods) are readily available. They just need to be applied. So why aren’t they?
Based on my (limited) experience so far, part of the answer appears to be the limited outreach of the HCI perspective: outside of a relatively small circle of HCI practitioners and advocates, user-centered design principles do not seem to be applied in system development and optimization processes. During my Master’s studies, I worked with computer science students who simply discarded all my suggestions because the existing solution “fulfilled the requirements” (read: offered the agreed-upon functionality, even if not in a way that was optimal from a user perspective). A few months ago, I tried to convince a head of department to make changes to the work system in use at his company in order to create a system flow that would better fit his employees’ actual workflow and work context – in vain.
Why is it so hard for HCI to have a real, large-scale impact on the systems being produced – both in industry and within academia (because let’s face it, university websites and other university resources are usually good examples of what not to do)? How do you think we could change this situation?
If you are an HCI practitioner yourself, have you encountered similar difficulties in convincing decision-makers and stakeholders to implement your design-related suggestions for improvement? How have you gone about trying to “get your way” in spite of this resistance?