Understanding the Nature of Sound
This article is part of a series on technology. The author previously wrote about smartphones. In the next piece, he will explore the technology of image display.
The smartphone has been around for less than two decades, yet it incorporates technologies that provide data to three of our senses, which is rare for a single device. One of the senses that interacts with smartphones is hearing.
In fact, the word telephone derives from Greek: ‘tele’ means far and ‘phone’ means voice, so together they mean ‘distant voice’. Communication is vital to our civilisation, so we chose a talking machine to host other technologies, such as the gyroscope, the thermometer, the internet, and many more.
If the telephone is the bottle that carries our vocal interactions, then the juice is the voice or, more generally, the sound.
A voice is a sound: a form of energy that vibrates through the air. The number of vibrations per second is called the frequency. A sound in the range between twenty hertz (Hz) and twenty kilohertz (kHz) falls within what are called the audible frequencies. Any sound outside that range is imperceptible to human hearing. But some members of the animal kingdom, like dogs, bats, or dolphins, can detect a wider range of frequencies than humans. This means we experience only a small portion of reality. There are so many phenomena out there, most of which we know nothing about; that is what science continues to investigate.
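As a minimal sketch, the audible range above can be expressed as a simple check. The function name `is_audible` and the constants are my own, for illustration only:

```python
# The human audible range described above: roughly 20 Hz to 20 kHz.
AUDIBLE_MIN_HZ = 20
AUDIBLE_MAX_HZ = 20_000

def is_audible(freq_hz: float) -> bool:
    """Return True if a frequency falls within the human hearing range."""
    return AUDIBLE_MIN_HZ <= freq_hz <= AUDIBLE_MAX_HZ

print(is_audible(440))      # concert pitch A4: True
print(is_audible(40_000))   # bat echolocation territory: False
```

Dogs and bats would, in effect, use wider bounds than these.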
In the past, a technological innovation could take generations to emerge. Today, the number of transistors on a chip doubles roughly every two years, as observed by Moore's law, and computer performance (processing power) grows with it. Meanwhile, miniaturisation allows ever more technologies to be packed into smartphones. Among these is sound technology.
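As a rough illustration of that exponential doubling, assuming the commonly cited two-year doubling period (the function `transistor_growth` is hypothetical, not an actual measurement):

```python
def transistor_growth(initial: int, years: float, doubling_period: float = 2.0) -> float:
    """Project Moore's-law growth: the count doubles once per doubling_period."""
    return initial * 2 ** (years / doubling_period)

# Ten years at a two-year doubling period means five doublings: a 32x increase.
print(transistor_growth(1_000_000, 10))  # 32000000.0
```

The striking part is the compounding: five doublings already yield a 32-fold increase.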
The loudspeaker once competed with the power transformer for the title of biggest component in early electronic devices. Now the transformer has been pushed out as an external part, commonly called the power adapter, and the loudspeaker has shrunk enough to remain inside most devices with an audio system.
For a sound to be generated, energy must be transformed and released in a conducting medium like air or water. Objects need to interact or collide to emit sound, and the sound emitted depends on the molecular structures of the interacting objects or instruments: their sizes and shapes, their textures, the nature of the space or medium in which the interaction happens, and many other factors.
Consider a more familiar example: a person beating a traditional drum causes vibrations in the taut, dried animal skin stretched over an enclosed wooden chamber. The chamber amplifies the skin's vibrations, which is how the sound of the drum propagates over distance, making it a major primitive long-distance communication tool for millennia.
Due to its propagation properties, the sound of the average Rwandan drum is low in frequency, commonly known as bass. The bigger the drum, the lower the frequency. Low frequencies travel longer distances than mid or high frequencies. This is why the poorly soundproofed bar in your neighbourhood seems to leak only the bass component of its music.
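One way to see why bass carries so well: low frequencies have long wavelengths, which bend around obstacles and pass through walls more easily than short ones. A quick sketch, assuming the speed of sound in air at about 20 °C (the function `wavelength_m` is my own):

```python
SPEED_OF_SOUND_AIR = 343.0  # metres per second, in air at about 20 °C

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres: the speed of sound divided by the frequency."""
    return SPEED_OF_SOUND_AIR / freq_hz

for f in (60, 1_000, 5_000):
    print(f"{f} Hz -> {wavelength_m(f):.2f} m")
# 60 Hz -> 5.72 m
# 1000 Hz -> 0.34 m
# 5000 Hz -> 0.07 m
```

A 60 Hz bass note spans several metres per cycle, longer than the wall it meets, while a 5 kHz hiss is only centimetres long and is easily absorbed or blocked.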
Natural vs artificial sound
Natural sound is produced by natural phenomena like the wind, lake waves, or birdsong. These sounds can be pleasing and relaxing; at the very least, we don't find them annoying.
The last two centuries have seen a considerable number of technological breakthroughs. Yet many of these machines are still noisy: the undesired sound they emit drives us crazy, and it often accompanies degraded machine performance. We dislike this sound because it is rarely harmonic.
On the other hand, an artificial sound that is harmonic or carries sensory information, like music or speech, is ingenious: it plays a vital role in human exchange, whether cultural, educational, or economic.
Early sound technology could only deliver waves from a single source (mono, or single-channel, sound). Naturally, we hear sound omnidirectionally, in 3D, because we have two ears placed on opposite sides of the head, roughly 180 degrees apart. This allows us to locate with some precision where a sound originates within the 360-degree space surrounding us. Today we can capture sound and spread it omnidirectionally using multiple speakers performing in harmony; we call this technology surround sound.
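Much of this sense of placement comes from level differences between the two channels. A minimal sketch of stereo panning, assuming a constant-power pan law (the function `panned_stereo` and its parameters are my own illustration, not any particular audio library's API):

```python
import math

SAMPLE_RATE = 8_000  # samples per second; a low rate keeps the example small

def panned_stereo(freq_hz: float, duration_s: float, pan: float):
    """Generate (left, right) sample lists for a sine tone.

    pan = 0.0 places the tone fully in the left ear, 1.0 fully in the
    right, using a constant-power pan law so loudness stays steady.
    """
    left_gain = math.cos(pan * math.pi / 2)
    right_gain = math.sin(pan * math.pi / 2)
    n = int(SAMPLE_RATE * duration_s)
    left, right = [], []
    for i in range(n):
        s = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        left.append(left_gain * s)
        right.append(right_gain * s)
    return left, right

# A tone panned hard left leaves the right channel silent:
l, r = panned_stereo(440, 0.01, pan=0.0)
print(max(abs(x) for x in r))  # 0.0
```

Sweeping `pan` from 0.0 to 1.0 over time is, in effect, how those zombie footsteps walk from one ear to the other.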
I remember listening to Michael Jackson's Thriller on stereo (2D sound) headphones and hearing the zombie footsteps start in one ear, cross my head, and walk to the other. The experience made me feel as if the zombie were crossing the very room I was in, projecting a spatial impression and making the sound more realistic.
Before recording technology existed, sound could not be preserved. Listening to music required being on site with the musicians, and the same applied to speeches. Thanks to advances in engineering, sound technology now allows us to electronically mimic the entire audible spectrum, pushing civilisation into an era where sound patterns can be transmitted across time and space.
Many of us can now record ideas, knowledge, and information and transmit them across distances and generations.
We want to hear what you think about this article. Submit a comment.