Are voice assistants perpetuating our biases?

Will AI systems always inherit harmful tendencies from the data they are trained on and the inputs they receive? It is a company's responsibility to filter bad training data, or at the very least to limit and reduce outputs that may perpetuate or encourage harmful behavior. Alongside this, it is also important to determine whether a voice assistant can detect and understand nuances in speech, accents, and spoken language variations. The future for tech companies is to continue investing in ethical data collection.
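What might "filter bad data" look like in practice? Here is a minimal, hypothetical sketch: screening examples against a blocklist before they ever reach the model. The `BLOCKED_TERMS` set and the sample strings are illustrative placeholders, not a real moderation list; production systems would use far more sophisticated classifiers.

```python
# Hypothetical sketch: drop training examples containing blocked terms.
# BLOCKED_TERMS and the sample data below are illustrative placeholders.
BLOCKED_TERMS = {"slur_1", "slur_2"}  # stand-ins for a real moderation list

def is_clean(text: str) -> bool:
    """Return True if the text contains none of the blocked terms."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return words.isdisjoint(BLOCKED_TERMS)

raw_examples = [
    "what's the weather like today",
    "play some music slur_1",  # would be dropped by the filter
]

training_examples = [text for text in raw_examples if is_clean(text)]
print(training_examples)  # ["what's the weather like today"]
```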

"Diversify the data you train these systems on, and the systems may become more open to diversity themselves." - Allison Koenecke, Standford University Graduate & author at Inverse

Resources: Inverse x Voice Assistant biases

Voice Assistants & Privacy Concerns