After learning about SEO in our last class session, I was reflecting on the massive power and responsibility Google has in governing what we consume. Google Search effectively creates rules that amount to a self-policing system, rolling out as many as 1,000 updates to its algorithm every year.
These updates include "Panda," which screens for content quality, and "Penguin," which screens for link quality. These updates effectively punish sites that break the rules of the system.
But beyond enforcement, Google has made choices about what the rules should be. For example, it expanded mobile-friendliness as a ranking signal, which means it is choosing mobile-friendliness as a feature we should value when we're searching for content. Is this always accurate, and is it fair? Are there situations in which a page that lives only on a desktop site should have gotten priority, but its lack of a multiplatform approach left the searcher high and dry?
Which leads me to the question posed in the title of this article - what choices might Google make on our behalf in the future, for better or for worse?
I recently submitted a questionnaire on behalf of my company to rank us on an ESG index. One of the questions asked whether we used machine learning or AI to screen for bias before releasing marketing or advertising materials. Is this something Google is also being asked to do? Is it possible it's already doing it?
As pressure mounts on corporations to take an activist approach to their social policies, will there come a time when Google's users, its shareholders, or our government pushes it to adopt ranking policies that favor certain characteristics? I can imagine a world where we petition Google to prioritize pages whose gender representation in photography is respectful and balanced. I can also imagine a world where this is executed haphazardly: rules that take a site's content at face value rather than considering its contextual depth.