If you want to derail a UX team’s productivity for 20 minutes, mention ghost buttons and back away slowly. Opinions will be shared, like...
“Ghost buttons don’t follow traditional usability conventions — they have limited affordance of being ‘clickable’, so users won’t click them and conversion will drop!”
“No, users don’t need affordances anymore; they know how to use the internet, old man!”
Links to articles will be provided as ‘evidence’ of where they do and don’t work. We’ll debate aesthetics vs usability.
Ghost buttons are often touted as low-affordance, and that’s true of many of the examples we share — buttons with poor contrast placed over images, making them difficult to use (such as here). This confounds the tests (beautifully explained here): we say we are comparing ghost buttons and ‘normal’ buttons, but we are really testing accessible vs inaccessible buttons, high vs low contrast designs, high vs low affordance designs (and it’s no surprise that ghost buttons lose).
How would a ghost button compare against a solid button when both are accessible?
We tested four buttons on a call to action (CTA) to write a review about your company: the primary SEEK pink button, our secondary blue button, our secondary grey button, and we snuck in the ghost button experiment to put the debate to rest (or so we thought).
Our first test was a four-way split between all the colours. Our concern that grey would underperform because it looks ‘disabled’ was proven correct: it saw an 8.6% drop in conversion from the control. Surprisingly, the ghost button performed better than grey and pink, and similarly to the blue button. After 10 days we had not reached statistical significance, as blue and ghost kept flipping between first and second place, but they were clearly ahead of the other variants.
Test 2 compared only blue and ghost and ran for 10 days. At the end of the 10 days the two buttons were performing identically. Looking at end-to-end conversion (actually completing a review), blue was slightly ahead of ghost, but not by a statistically significant margin. The ghost button performed as well as our secondary blue button, but no better. Therefore, there was no case for the challenger (ghost) to replace the control (blue).
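For readers curious what “not statistically significant” means in practice here, a common way to check whether two buttons’ conversion rates genuinely differ is a two-proportion z-test. The sketch below is illustrative only — the conversion counts are invented, as the article doesn’t publish its raw numbers.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic and two-sided p-value for the difference between
    two conversion rates (clicks / impressions)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: blue converts 520/10,000, ghost 505/10,000
z, p = two_proportion_z(520, 10_000, 505, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05, so the gap
                                    # could easily be noise
```

A p-value above the conventional 0.05 threshold is why a small lead for blue, like the one in test 2, isn’t enough to call a winner.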
What does this mean for ghost buttons?
While the ghost button performed well, we’re not making a site-wide change based on this one test. The test was on secondary actions a user can take on these pages, and there is still a concern that ghost buttons will not give enough visual weight to the primary action on a page (the one thing we really want users to do). We typically reserve SEEK pink for the primary CTA on a page, as multiple strong pink buttons compete with each other and dilute the CTAs. Multiple strong calls to action are lazy design: they split attention and put the cognitive load on the user to figure out what to do next, when we as designers should be guiding them.
These results don’t mean that ghost buttons will work everywhere, or that we should be replacing our strong pink primary actions with ghost buttons. It would be interesting to see whether the buttons would perform as a primary CTA within our apps… watch this space for more tests.
Overall, I have to admit, I was surprised. I was sure the ghost button would perform worst, or perhaps only better than grey. It just goes to show that the answer to any UX question, no matter how much we think we know, is ‘it depends’, and the only way to find out is to test with real users. UX is like quantum physics: your interface is both excellent and horrible, usable and unusable, at the same time. You won’t know which until it has been tested with real users and observed.