This episode is sponsored by the Polytechnic Institute of València. Hello, I'm Aldo, and what you see here below is my greeting, but in binary code, in ones and zeros. I did it with a tool that you too can find on the Internet. Of course, you can do it manually, but it's a bit harder. You could even do it "by ear", using your ears. ♪♫ [Introductory Music] ♪♫

These sequences of ones and zeros for each letter were established in 1963 as the American Standard Code for Information Interchange. And yes, as you can tell, the standard is in English and uses the English alphabet, so if your name is "Ñoño", unfortunately you won't be able to translate your name into binary code. Sorry, Ñoño… This system was created as an evolution of the telegraphic codes used previously. What purpose does it serve now? Not much; I mean, you aren't going to write a love letter to your crush in binary code… however freaky you may be. Well, maybe. Don't misunderstand me: I don't mean that binary code isn't useful, but rather that translating each letter by hand isn't useful. Binary code is a fundamental part of the technology we use today to process incredible quantities of information.

But to better understand how information is processed, let's start with the basics. The smallest quantity of information you can have is the answer to a question with only two possible responses: yes or no, which we'll represent with a one or a zero. What are these? Binary digits, "binary digits" in English, abbreviated: bit. A bit of information: one bit is "yes" and the other is "no". Two possible results. A die has 6 possible results (assuming it doesn't land on one of its corners, which is very unlikely), which is about 2.6 bits of information; you need 3 whole bits to write the outcome down. The letters of the English alphabet, which we saw previously, use 5 bits of information.
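The letter-by-letter translation those online tools perform can be sketched in a few lines of Python (the function name and the sample string are my own, purely illustrative):

```python
import math

# Minimal sketch of an online "text to binary" tool: look each character
# up in the ASCII table and write its code as 8 bits (one byte).
def text_to_binary(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

print(text_to_binary("Hi"))   # → 01001000 01101001

# A die's six outcomes carry log2(6) ≈ 2.58 bits of information,
# so you need 3 whole bits to record a roll.
print(math.log2(6))
```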
But if we add punctuation marks and the other symbols found in a standard document, every one of these symbols and letters needs 7 bits of information. What comes after 7? 8, and 8 bits were adopted by computers as the fundamental unit of computation: 8 bits equal 1 byte.

And here comes the good stuff: in their first decades, computers obviously couldn't process large quantities of information; they could handle only a few bytes. Here's an example: the navigation computer of the Apollo space program, the Apollo Guidance Computer (AGC), had approximately 64 kilobytes of memory. This was enough for the astronauts to enter simple commands, each made of a verb and a noun, to control the spacecraft. (You can also download the list of codes used by the astronauts as a PDF. Yes, they're online, and you can find them in the description.) Even if the astronauts had wanted to fly the modules entirely by hand, only the AGC could give them the navigational precision and control to send them to the Moon and bring them back home safe and sound. This little device was one of the first to use integrated circuits, in other words, chips. And what's in a chip? Transistors, which we'll get to later.

The AGC was the computer the astronauts used in the modules during the Moon landing. On Earth, mission control was in Houston, at what is now the Lyndon B. Johnson Space Center; that's why you always hear "Houston, Houston, do you read?" There they also had computers, of course: IBM computers, much more advanced than those on board. IBM had developed programs of 6 megabytes (6 megabytes!) which allowed them to monitor what was happening on the spacecraft, as well as the astronauts' biomedical data. At the time, it was the most complex software that had ever been written.
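The jump from 5 to 7 to 8 bits is just powers of two, and the AGC's memory figure can be put in the same terms (the 64-kilobyte number is the one quoted above):

```python
# Each extra bit doubles the number of values you can represent: 2**n.
for n in (1, 5, 7, 8):
    print(f"{n} bits -> {2**n} possible values")
# 5 bits cover a bare alphabet, 7 bits the full ASCII character set,
# and 8 bits (one byte) became the standard unit of computation.

agc_bytes = 64 * 1024            # the ~64 kilobytes quoted for the AGC
print(agc_bytes * 8, "bits")     # → 524288 bits
```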
Yes, I know many of you will say: "But Aldo, how could they do all this if the Internet didn't exist?" To those who have only known the Internet, the old world may seem a little primitive. But they still did interesting things; radio waves were the common way to communicate back then. If you're "old" enough (quotation marks intended), you'll remember analog TV, with an antenna you had to move around to get a good signal and see the channel clearly. A large part of television has since moved from analog to digital. But how do these two work?

Analog television uses radio waves, and through a complex process those radio waves are converted into images and sound. Radio waves travel at the speed of light, since they belong to the electromagnetic spectrum. The live transmission from the Moon had a delay of only a little more than a second, because the speed of light is around 300,000 kilometers per second, and 300,000 kilometers is almost the distance between the Earth and the Moon. Keep that in mind from here on. Of course, the live transmission system and the cameras used for the Moon landing were designed especially for that task, which is why a good part of the image quality was lost and a good amount of interference could be noticed. On analog televisions, when there's a lot of interference, the image and sound distort. These problems are eliminated with digital television, which uses ones and zeros.

How are they grouped? How are they reordered, these ones and zeros? Through transistors, and now we arrive at transistors. A transistor takes a zero and sends it here, takes a one and sends it there, depending on where it's needed. Let's see a more detailed example: we have an ordered set of zeros and ones that means the number 42. The transistors handle passing these zeros and ones to their respective group, perhaps to audio or to color.
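That one-second delay checks out with round numbers (the 384,400 km Earth–Moon distance is the commonly quoted average, not a figure from the episode):

```python
# One-way delay of the lunar broadcast at the speed of light.
speed_of_light_km_s = 300_000    # rounded figure used in the narration
earth_moon_km = 384_400          # commonly quoted average distance

delay_s = earth_moon_km / speed_of_light_km_s
print(f"{delay_s:.2f} s")        # → 1.28 s, "a little more than a second"
```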
The color can be the number 42, but that 42 can also be interpreted as a word. All of that information is processed in less than the blink of an eye. The transistors then handle organizing all that binary data, depending on the amount of memory and the amount of information being processed; here we're talking about better or more advanced technology. Yes, I know some live transmissions lag a bit, but that's due to the quantity of information being processed: a high-resolution image can take quite a while to process. The trip to the Moon was transmitted in an analog manner, and today we use both analog and digital transmissions.

For example, the sound waves produced by our voice are physical; they're analog waves. They're captured by the microphone, also an analog device, and then converted into digital information by other devices so that they can live in our computer. Human hearing is analog: sound waves hit our eardrum and make it vibrate. These vibrations are transmitted by the small bones of the middle ear until they reach the hair cells of the inner ear, which pick up the vibrations and convert them into nerve impulses, which the brain interprets as sound.

The brain… and here we get to the interesting question: is the human brain analog or digital? Think about it for a bit. We've said that hearing is analog because it's physical and works by physical means. Does that mean the brain is also analog? Not necessarily. The brain is much more complicated; it's neither one nor the other, but it carries out functions that use signals with properties of both analog and digital processing. The brain is obviously not a computer. It doesn't use binary logic, nor does it do binary arithmetic. Information in the brain is represented by approximations and estimates rather than by exact values, and it can't execute sequences of instructions without a margin of error.
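The microphone-to-computer step described above is sampling and quantization. Here's a toy sketch, assuming a 440 Hz tone, an 8,000 Hz sample rate, and 8-bit samples (all illustrative choices, not from the episode):

```python
import math

# Toy analog-to-digital conversion: measure a continuous sine wave
# ("the analog microphone signal") at regular instants and round each
# measurement to one of 256 levels, i.e. an 8-bit sample.
def digitize(freq_hz, sample_rate_hz, n_samples, bits=8):
    levels = 2 ** bits
    samples = []
    for i in range(n_samples):
        t = i / sample_rate_hz
        analog = math.sin(2 * math.pi * freq_hz * t)       # value in [-1, 1]
        samples.append(round((analog + 1) / 2 * (levels - 1)))
    return samples

print(digitize(440, 8000, 5))    # five 8-bit samples of a 440 Hz tone
```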
So, with all this in mind, we can say that the brain is not digital. At the same time, however, the signals sent through the brain can be interpreted as either present or absent. How is this possible? It's something like the "on" and "off" used to represent 1 and 0 in binary code, except that here the signal either fires or doesn't fire. These all-or-nothing pulses are a basic function of the brain. In other words, it doesn't use a binary code of ones and zeros, but one of "fired" and "not fired", which we can say is fairly similar. For a better picture, here's a graph of analog and digital signals and of how the brain's neurons fire: the high points are when the neurons are most active, where they fire, and down below is where they don't fire. Internally, though, a neuron still works through biochemical means, which can be called analog. The brain processes information in a combined manner; we can't speak of a strictly digital brain or a strictly analog brain, but rather of a brain that processes information in its own way.

And here another interesting question arises: will our technology ever enable us to create a digital brain that fulfills the functions of a human brain? And would that be a wise thing to do? Perhaps. Why don't we wait and see how the technology advances?

Computers and digital televisions use microchips to process information. These microchips contain transistors, but different ones from those Apollo had. Apollo's chips held only a few transistors; now we're talking about millions of transistors in a single microchip, transistors smaller than a virus. They're so tiny that the only limit to making a faster, more effective computer is how many of them we can fit on a single microchip.
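The "fired / not fired" behaviour can be caricatured as a threshold unit. This is a toy sketch, not a biological model; the input values and the 1.0 threshold are invented for illustration:

```python
# All-or-nothing firing: incoming signals accumulate, and the neuron
# either fires (1) or stays silent (0); there is no "half fire".
def fires(input_signals, threshold=1.0):
    return 1 if sum(input_signals) >= threshold else 0

print(fires([0.4, 0.3, 0.5]))   # total 1.2 → fires: 1
print(fires([0.2, 0.3]))        # total 0.5 → silent: 0
```

The analog part the narration mentions lives in the inputs (graded, continuous values); the digital-looking part is the all-or-nothing output.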
In 1965, Gordon Moore, co-founder of Intel, predicted that the number of transistors that fit on a chip would double every two years; in other words, computers would become more powerful every two years. This is known as Moore's Law, and it held for a good amount of time. In 1972 there were around 2,300 transistors on a chip. How many transistors do you think a chip had in 2006? Calculate it. Ah, I'll help you: there were 300 million transistors on a single microchip in 2006. Around that year, Moore's Law began to break down, for a simple reason: it keeps getting harder to shrink the space between the atoms, and we can't make the atoms themselves any smaller. This is a big problem, so the search for alternatives has begun. One of these alternatives is thermal computing, which uses variations in temperature to represent bits of information. Another is quantum computing, which isn't limited to a "yes" or "no" answer: thanks to the ever-unpredictable quantum world, it can sometimes answer yes, sometimes no, and sometimes something in between. It's pretty crazy.

Be that as it may, when we obtain the technology to create an artificial brain resembling a human one, perhaps it wouldn't be a good idea to make something too similar to the human brain, but rather something that complements our activities. Today there are already computers that quickly carry out calculations no human can do, but we are still much better than machines at perception, reasoning, and manipulating objects. We invented airplanes, which help us travel even though they aren't birds, and don't try to be. Some think computing technology should be developed in the same spirit: complementing human activity instead of imitating it, or even worse, replacing it. But what do you think?
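The "calculate it" challenge works out like this: doubling 2,300 transistors every two years from 1972 to 2006 is 17 doublings, which lands almost exactly on the 300 million quoted:

```python
# Moore's Law as stated in the narration: double every two years.
transistors = 2_300                # the ~1972 figure
for year in range(1972, 2006, 2):  # 17 two-year steps
    transistors *= 2

print(f"{transistors:,}")          # → 301,465,600, i.e. ≈ 300 million
```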
Do you think technology will one day enable us to create a brain that carries out the functions of a human brain? Do you still use an analog TV with a rabbit-ear antenna? Let us know in the comments. That's everything from me. Thanks for watching, and see you next time! ♪♫ [Conclusion Music] ♪♫
Subtitles: djordjian fairpaltrusi’o
