In the last decade, the rise of mobile devices with touchscreens has been prominent, and there are two main types of touchscreen. The first, and cheaper, style is known as resistive. It uses two separated conductive films that allow current to flow when they come into contact, and the point where the current flows is where the user is touching. The issue with this system is that it requires physical movement of the layers, meaning it can be triggered by anything pressing them together; also, if the layers are no longer even, they can touch even when nothing is pushing on them, causing unwanted actions. The solution to these issues is the more complicated design, known as capacitive touch. In one common design, electrodes at the four corners of the screen measure how the screen's capacitance changes when a finger touches it, and from those measurements the computer can determine the position of the touch. This is exceptionally useful for avoiding accidental touches, and for creating a much more durable touch surface. It also gives the user much more precision and ease of use, since nothing has to physically move, so there is less to go wrong. The disadvantage is that water and anything else conductive greatly reduces the accuracy and usability of such a touchscreen, as it interferes with the measurements. Thanks to this kind of technology, it is much easier for us to use our mobile devices with ease and precision.
Conventional light bulbs use a filament, something to run current through that will heat up and produce light. These are very inefficient, however, as more energy goes toward producing heat than light. This is why the idea for CFL light bulbs came about, which run a current through gas to produce their light. This method is more efficient, but still isn't perfect. This is where LEDs come in, as they are small, very efficient, and require little power to produce a good amount of light. The reason all lights haven't been replaced with LED lights is their cost, which, while ever decreasing, is still more than that of regular filament bulbs. This cost discrepancy is made more dramatic by the fact that LEDs don't produce as much light per dollar spent, so for bright bulbs such as flood lights, it gets very expensive. In order to make a bright bulb, a large number of LEDs is required, and this effect increases the brighter the bulb gets. The problem is compounded for companies wanting to replace all of their bulbs at once, since many large buildings use a huge number of lights and installing them all would be expensive. I have an LED bulb in my room, and it is noticeably not as bright as a conventional bulb, but it will last a lot longer than any filament-based bulb. Also, because LED bulbs already contain circuitry, it isn't complicated to build other things into the bulb. My bulb has a Bluetooth speaker in it, which works very well considering it's inside of a light bulb. I think that in the future, when LED technology becomes cheaper and better at producing light, all light bulbs will eventually become LED, unless they are specifically another type for one reason or another.
Duct tape is famous for its ability to hold things together, as its adhesive is incredibly powerful. Duct tape is built on a woven fabric, which can be made of nylon or polyester and gives the tape its tensile strength. This is then coated with a thin layer of polyethylene, which makes the tape water resistant. The tape can hold upwards of 65 pounds before snapping, despite how thin it is. That accounts for part of duct tape's strength, but where it really shines is the adhesive. This adhesive is made up of rubber compounds, aimed at longer-lasting bonding with the material. Normal tape uses acrylic polymers instead, which are far weaker at bonding than rubber compounds. Duct tape is used by NASA, and during the Apollo 17 moon mission it was used to repair a broken fender on the lunar rover. This trusty and faithful tool has been in use for decades, fixing whatever needs fixing. Its dependability has spawned the phrase "If you can't fix it with duct tape, you aren't using enough duct tape."
The point of a tank's armor is to protect the crew inside from bullets, shells, and anything else potentially dangerous. The first tanks used relatively thin metal plates, since protecting against bullets was all that was necessary. As the technology advanced, more armor was put on tanks, so bigger guns were used to break through that armor. This raised the issue of weight: you can't just keep adding armor to a tank to make it safe, because the weight would make the tank immobile. The solution, then, had to be more clever, allowing tanks to maintain their protection while still being mobile. The solution was sloped armor. While no one person is credited with developing sloped armor, its first major uses on tanks were on early-WWII Soviet tanks. The reason sloped armor is so effective is that it increases the effective thickness of the plate without adding weight to the tank.
Here is a demonstration of how this is accomplished, and just how effective the technique is:
This plate of armor is only 100mm thick, but because of the way it is angled back, the effective thickness of armor that a shell would have to pass through is twice that of the plate. This allows the armor to be light and effective, making for a durable but maneuverable tank. The Germans didn't adopt this technique until late in WWII, which is evident in their most iconic tank of the war, the Tiger. All of its armor is flat, and all corners of the tank are right angles. This left it comparatively weak against tanks that achieved the same effective armor thickness, if not more, while being much lighter and more maneuverable. Sloped armor arrived on German tanks much later than on Allied ones, as all of the Allies had tanks with sloped armor. This disconnect in technology, albeit in such a simple way, makes sloped armor an interesting technique in the theater of warfare that spanned Europe.
The original Tiger I:
The late war Tiger II tank, featuring armor sloped back at 40°, with a thickness of 150mm:
To calculate the effective thickness of the armor, the equation is T = h / cos(α).
With T being the effective thickness of the armor, h being the nominal (regular) thickness of the armor, and α representing the angle of impact.
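As a quick sketch (with the angle measured from the vertical, matching the Tiger II example above), the calculation takes only a couple of lines of Python:

```python
import math

def effective_thickness(nominal_mm, angle_deg):
    """Effective thickness of a plate sloped back angle_deg from vertical."""
    return nominal_mm / math.cos(math.radians(angle_deg))

# The 100 mm plate from the example, sloped 60 degrees back,
# presents twice its nominal thickness to an incoming shell:
print(round(effective_thickness(100, 60)))   # 200

# The Tiger II's 150 mm plate sloped at 40 degrees:
print(round(effective_thickness(150, 40)))   # 196
```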
Everyone knows what Godzilla is, and the creature's exploits in destroying cities, but not many question the possibility of such a creature's existence. Godzilla is 355 ft. tall and weighs 90,000 tons, meaning that he is massive in both size and mass. This is equivalent to the size and weight of a cruise ship, but one that's capable of walking around. With all this weight to tote around, is it even possible that Godzilla would be able to walk, let alone simply stand? Well, seeing as his weight is equivalent to roughly $3 trillion in gold, it would take a lot of effort to move him around. He would need an exceptionally large amount of energy to walk, meaning he would have a huge appetite. This is assuming, however, that his bones could support the weight of his body, because at that size, bones aren't as strong relative to the load they carry as they are on the scale of a human or an elephant. As a result of the "square-cube" law, the mass and volume of bone grow faster than the strength of the bone does, eventually reaching a point where the skeleton would collapse under its own weight. Assuming that Godzilla has bones that could support his weight, the next largest issue is his blood. Because of his size, he would have to have an extremely powerful heart, and the blood pressure in his lower body would be far greater than that in his upper body unless his blood flowed at an extremely fast rate. Also, due to the rate at which neural signals travel, his reactions to any sort of stimulus would be greatly delayed. Taking all of this into account, while Godzilla is a classic movie monster, his reality would be a stumbling, broken mess that would flounder the moment it left the water.
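The square-cube argument can be made concrete with a quick back-of-the-envelope calculation (the human baseline height here is my own assumption):

```python
# Square-cube law sketch: scale a human-sized creature up by a factor k.
# Mass (volume) grows as k**3, but bone strength (cross-sectional
# area) only grows as k**2, so the stress on the bones grows as k.

human_height_m = 1.8
godzilla_height_m = 355 * 0.3048   # 355 ft converted to meters
k = godzilla_height_m / human_height_m

print(f"scale factor:       {k:.1f}x")
print(f"mass grows by:      {k**3:,.0f}x")
print(f"strength grows by:  {k**2:,.0f}x")
print(f"bone stress grows:  {k:.0f}x")
```

So even with proportionally identical bones, every bone would carry roughly sixty times more stress than a human's.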
Altimeters are used by many people in a variety of situations, such as skydiving, flying, and even hiking. The purpose of an altimeter is to give the user a readout of their current altitude. This can be achieved in many different ways, but the most common method is to measure the air pressure. By also taking into account the current temperature, one can calculate altitude from pressure. In early aircraft, this was done with analogue meters rather than digital calculations. They weren't the most accurate, but they still worked very well considering what they were being used for. Now there are many different kinds of altimeters, and for small-scale applications many analogue altimeters are still in use, though modern technology allows for much more accurate readings. For hiking, an altimeter used together with a topographical map lets a hiker determine their location. In skydiving, it's very important to know your altitude, as knowing when to pull the parachute is absolutely paramount. In planes, there have been many different kinds of altimeters, but their general purpose has always been to let pilots determine their altitude, which can also be useful for judging whether weather patterns are affecting the aircraft. The digital varieties include sonic and radar altimeters, which show relative altitude, meaning the height above the ground rather than above sea level. While altimeters are useful, they can only really be used in one way: to determine altitude.
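As a sketch of the pressure method, here's the standard-atmosphere barometric formula that digital pressure altimeters are commonly based on (the constants are the usual international-standard-atmosphere values; real instruments also correct for the local sea-level pressure setting):

```python
def pressure_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Barometric altitude from static pressure, using the
    international standard atmosphere (valid up to ~11 km)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1 / 5.255))

print(round(pressure_altitude_m(1013.25)))  # 0 m at sea-level pressure
print(round(pressure_altitude_m(898.8)))    # roughly 1000 m
```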
Ever since the earliest days of the car, suspension systems have been in use to keep the ride comfortable. Seeing how commonplace it is for cars to have good suspension nowadays, we often take for granted just what these systems do to make our daily drives that much more comfortable. There are many different kinds of suspension, but almost every kind involves some sort of spring. The earliest system used in cars was a simple spring connecting the frame to the steering assembly, allowing for some amount of give inside the car. Another type of suspension was the "leaf spring", which consisted of multiple flat metal beams layered on top of each other, each one slightly larger than the one below it. Physically this acted like a normal spring, but it could hold a much larger load, and as such many large trucks and even tanks used this type of suspension. Even today, many types of heavy vehicles use leaf springs, which are rugged and relatively simple to produce. The system used on most modern cars today is a combination of a spring and a "dampener". The spring is what allows for movement of the axle, and the dampener slows down the bouncing. Without a dampener, the car would bounce all over the place, since the springs would simply oscillate back and forth. The dampener slows down the movement, essentially adding a frictional force to the oscillation. This allows the oscillations to stop very quickly, coming to rest back at the center. The combination of the two components not only keeps the interior mostly level across uneven terrain, but also lets the car's wheels maintain traction while driving. Another type of suspension is magnetic, which uses electromagnets in place of the spring, allowing for control over the height of the suspension.
The reason this suspension isn't very common is that it is expensive to implement, and mostly unnecessary unless you want to raise and lower your car a few inches, for whatever reason. The magnets act the same way the spring does, pushing back as the axle moves up and down. We often don't think about how well these systems maintain a comfortable ride on our way to work or school.
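The spring-versus-dampener behavior described above can be sketched with a tiny simulation (the mass, spring rate, and damping numbers are made up, just car-ish):

```python
# Mass-spring-dampener sketch: integrate m*a = -k*x - c*v after a
# 0.1 m bump, and report the biggest bounce still happening in the
# last half second of a two-second run.

def bounce_amplitude(c, m=400.0, k=20000.0, dt=0.001, t_end=2.0):
    x, v = 0.1, 0.0               # start displaced 0.1 m by the bump
    peak = 0.0
    for i in range(int(t_end / dt)):
        a = (-k * x - c * v) / m  # spring force plus dampener force
        v += a * dt
        x += v * dt
        if i * dt > t_end - 0.5:
            peak = max(peak, abs(x))
    return peak

print(f"no dampener:   {bounce_amplitude(c=0.0):.4f} m")    # still bouncing
print(f"with dampener: {bounce_amplitude(c=3000.0):.4f} m") # settled
```

Without the damping term the oscillation never dies out; with it, the body is essentially back at rest within a second or two.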
Parkour is a sport built entirely around momentum. The idea is to control your movements so that you maintain your momentum while running around. Normally parkour involves a lot of vertical movement as well as large horizontal movements. The difficulty comes from having the strength to pull yourself up and over obstacles, or, if you're good enough, staying at speed and redirecting your momentum to allow for fluid movement in all directions. Classic examples are wallrunning to cross a gap, or using a corner to jump higher: they run up the wall and jump off, using their forward momentum to keep traction, which lets them gain extra upward momentum. While there are many different ways one can "parkour", it's often done in an urban setting, which means there is a lot of hard ground to fall on. One technique used to avoid hurting oneself when jumping from a height is rolling on landing. The idea is to land loosely on your feet, bending your knees and tucking into a forward roll. This spreads out the impact and greatly reduces the chance of injury. Once you have the physical aspect down, many people like to add flair to their routines, doing tricks and flips to add style to their maneuvers, often while transferring from one stunt to the next. This is why it has become a sort of art, with each person developing their own style and look to make their own kind of parkour.
Space travel has used many unique forms of propulsion, such as solid-fuel rockets, liquid-fuel rockets, ion thrusters, etc. The newest proposal is still debated as to whether it is even possible. The EMDrive seems to break the laws of physics, as it appears to produce thrust without expelling any propellant, which would violate conservation of momentum. The inventor's claims were tested by a lab at NASA, which measured a small, unexplained thrust, though the result has yet to be widely accepted, as it is such an outlandish thing, and it has been put through peer review. The implications of such a technology would make space travel far easier, as a spacecraft would no longer need to carry reaction mass. This means we could send out a very small probe with a rather large engine, providing lots of thrust without a large fuel supply to go alongside it. This would let the probe accelerate faster, making maneuvers easier and space travel faster overall. It is not yet known how much this would speed up space travel, but it isn't a "hyperdrive" or "warp drive" by any means. That doesn't mean it wouldn't be revolutionary for space travel, however, because being able to travel without the concern of propellant would greatly expand the possibilities for space exploration. But it is still likely just a fluke, and we have yet to figure out exactly why the EMDrive appears to produce thrust, seeing as nobody quite understands it yet. It has been a topic of debate for years now, and still there have been no real updates on the situation, so it is highly likely the EMDrive will amount to nothing, but that is stopping nobody from imagining its possibilities.
I found this software a while ago called "Space Engine", which can only really be described as a universe-exploring tool. It's a simple program that lets you fly around in space, starting from Earth. You control the speed you travel at, and I feel this is the only thing to ever really give me a sense of perspective on how large the universe is. On first impression I thought everything I saw was based on reality, but I found out that, by default, anything outside of what we have actually observed is procedurally generated, meaning it is made up by software using random seeds and clever programming. This procedural generation is especially impressive, because you can fly between galaxies, slow down to find a star, and then explore the planets and whatever else may be orbiting that star. I did this over and over until I found a planet labeled a "warm terra", and looking closer I found it had everything: oceans, mountains, valleys, plateaus, etc. I compiled a bunch of the things I found, which were all interesting in their own way.
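A toy version of the seeded-generation idea, just to show the principle: the same coordinates always produce the same star, so nothing has to be stored (the star properties here are obviously invented, not Space Engine's actual scheme):

```python
import random

def star_at(x, y, z):
    # Seed a private RNG from the coordinates: same place, same star,
    # so the universe never needs to be saved to disk.
    rng = random.Random(hash((x, y, z)))
    return {
        "class": rng.choice("OBAFGKM"),   # spectral class
        "planets": rng.randint(0, 12),
    }

print(star_at(10, -4, 7))
print(star_at(10, -4, 7) == star_at(10, -4, 7))  # deterministic: True
```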
Here's a video I put together of some pretty scenes I found:
Desert planet with 3 stars:
Another view of the same 3 star system:
Mountain range with planet's rings going over the horizon:
Gas Giant in front of a galaxy:
Trailing edge of a galaxy perpendicular to horizon of a desert planet:
All of these formations were completely randomly generated, which is hard to believe considering how realistic they look.
Space Engine is free software, currently in beta and still being slowly developed. You can download it here: http://spaceengine.org/
The kids' movie UP, while serving its purpose as a kids' movie, isn't exactly known for being accurate in the physics department. In the movie, an entire house is lifted up by nothing more than balloons. This iconic scene is pretty, but is it probable? Discovery Channel's Mythbusters decided to test it by harnessing a small girl to a huge cluster of balloons and seeing how many it would take to lift her. They estimated about 2,000 fully inflated helium balloons would be enough to lift the young girl, but testing with a sandbag dummy showed they would need upwards of 3,500 balloons. After tying all 3,500 balloons to the harness, she was lifted into the air purely by the power of balloons. If it took 3,500 balloons to lift a small girl, it would likely take millions of balloons to lift a house off of its foundation and into the air.
Almost every device we use has a thermometer in it, even if we don't know it. They are used to determine whether different electronics have overheated, or whether they are too cold. They work on a different principle than a typical mercury thermometer, using a thermistor: a resistor whose resistance changes with temperature far more than a normal resistor's does. The changes in resistance are used to judge the temperature, which can be measured very accurately depending on the quality of the thermistor and the supporting electronics. Not only is this useful for preventing heat damage to electronics, it is also useful for making tiny thermometers that can measure temperature from just about anywhere.
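As a sketch, here's the common "Beta equation" for converting an NTC thermistor's measured resistance into a temperature (the 10 kΩ and 3950 values are typical datasheet numbers, used here as assumptions):

```python
import math

def thermistor_temp_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Beta-equation: NTC thermistor resistance -> temperature.
    r0 is the nominal resistance at t0_c; beta comes from the datasheet."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohms / r0) / beta
    return 1.0 / inv_t - 273.15

print(round(thermistor_temp_c(10_000), 1))  # 25.0 at nominal resistance
print(round(thermistor_temp_c(5_000), 1))   # warmer: resistance has fallen
```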
A lot of people have used the Kinect for Xbox, and at the time of its release in 2010 it was a new take on motion-control technology. It allowed for control without a controller by using three separate optical parts: a color camera, an infrared projector, and an infrared camera. The projector and camera are separated by a small distance, a bit like our eyes, which lets the system triangulate a 3D model of whatever is in front of it; this depth information is matched with the color camera to get an idea of the position of the person in front of the sensor. This, along with the decent refresh rate of the cameras, makes it possible for the player to use their body to control whatever the game allows.
I recently watched the movie "Passengers", and I noticed that on multiple occasions what happened was actually accurate. A lot of other sci-fi movies have huge inaccuracies that detract from the overall movie, but Passengers had one really good example of getting the physics right, which is a large spoiler, so fair warning to anyone who wants to see it. Jim, the main character, is blown out into space with a makeshift shield he had made. When he was thrown out of the ship, his tether broke and he was heading straight toward the ship's thruster, which would have burned him alive. His reaction was to throw the shield at the thruster, which gave the shield a lot of forward momentum, and because momentum is conserved, it gave him backward momentum. Because of this, he survived and was eventually pulled back inside the ship.
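The shield throw is straight conservation of momentum; with made-up masses and throw speed, the recoil that saves Jim looks like this:

```python
# Conservation-of-momentum sketch (the masses and speed are assumptions):
m_jim    = 90.0    # kg, Jim plus suit
m_shield = 10.0    # kg, the makeshift shield
v_throw  = 8.0     # m/s, how fast he throws the shield

# Total momentum starts at zero in his rest frame, so:
#   m_shield * v_throw + m_jim * v_jim = 0
v_jim = -m_shield * v_throw / m_jim
print(f"Jim recoils at {abs(v_jim):.2f} m/s")  # ~0.89 m/s backward
```

Even a hard throw only buys him a slow drift backward, which matches how narrow the escape looks on screen.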
I was first found to be colorblind in kindergarten, when the teacher had us coloring and I grabbed the wrong crayon multiple times. Many people don't fully understand what colorblindness is, how it affects someone, and what causes it. My type of colorblindness is known as "protanopia". That means I struggle to identify differences between red and green, blue and purple, and sometimes light green and yellow. Whenever someone finds out I'm colorblind, the question they usually ask is "What colors can't you see?". That isn't how being colorblind works; it doesn't make all color disappear, and it doesn't make certain colors disappear. This comes down to the way our eyes work, which is trichromatic, meaning there are three main colors they sense: red, green, and blue. Colorblindness occurs when one of these three types of sensor, called cones, partially mimics another one. This causes the information from the two to overlap, resulting in muted, harder-to-distinguish colors. Since there are tons of these cones in each eye, the proportion of faulty cones determines the severity of the colorblindness. My case of colorblindness isn't overwhelming, but is about average. I don't struggle with traffic lights, which is another common misconception people have about red-green colorblindness. On the other hand, I can almost never tell the difference between blue and purple, so I just assume it's all blue. While this color deficiency affects me during all my waking hours, it isn't overly limiting.
We use microphones all over the place, and most people carry one or more with them at any point in time. Most work on a fairly simple concept using two plates. One of them is much thinner than the other and acts as the diaphragm, the part that moves in response to sound. The other is thicker, and together the two plates form a capacitor. Sound waves slightly change the distance between the plates, and therefore the capacitance of the system. These changes in capacitance are measured and converted into an electrical signal, which can later be turned back into sound through speakers. Speakers work on a principle that is similar but opposite. Instead of measuring movement, the diaphragm is moved by the varying magnetic field of a coil around a magnet. By driving the coil with the right amount of current at the right time, sound waves that mimic what the microphone recorded are produced. This is a very analog system, meaning it isn't controlled by 1s and 0s being interpreted by a processor, but rather by the strength of the signal resulting from sound entering the mic. Obviously this can be converted back and forth to digital, but the speaker itself will always be a very analog type of technology.
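The parallel-plate capacitance formula C = ε₀A/d shows why even a micrometer of diaphragm movement is measurable (the plate area and gap are assumed, roughly condenser-mic scale):

```python
# Parallel-plate capacitance sketch: a sound wave that moves the
# diaphragm one micrometer produces a clearly measurable change.
EPS0 = 8.854e-12          # F/m, permittivity of free space
AREA = 1e-4               # m^2 of plate area (1 cm^2, assumed)

def capacitance_pf(gap_m):
    return EPS0 * AREA / gap_m * 1e12   # result in picofarads

at_rest   = capacitance_pf(20e-6)       # 20 micrometer gap at rest
deflected = capacitance_pf(19e-6)       # diaphragm pushed 1 um closer
print(f"{at_rest:.2f} pF -> {deflected:.2f} pF")
```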
A new software technique being used to create realistic-looking water without large processing times is based on approximating almost everything. The basic principle is to use a bunch of small spheres and calculate how they would react in a given situation, say water pouring out of a pipe. This would look like a large number of balls rolling out of a pipe, but the real magic happens in the approximations. The software uses how the balls move to judge how the water would move, then makes it look like water by hiding the balls and rendering a water surface where they are; if one ball gets separated, it simulates the surface tension breaking and the water forming a droplet. This type of approximation lets the software render realistic-looking water at a resource cost far below that of a full simulation. By adjusting certain parameters, the viscosity and surface tension of the apparent fluid can be changed, allowing this to be used to render all sorts of fluids, not only water. It can also be adapted to model smoke and fog, although with a largely different set of rules for the physics of each particle.
Here's an example from NVIDIA's tech demo, before the approximations:
And then after the approximations:
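A toy version of the underlying particle idea, stripped of all the clever fluid approximations: simulate the "balls" themselves, and a renderer would then skin a water surface over them afterward.

```python
import random

# Toy particle sketch: water "balls" dropped from a pipe fall under
# gravity and crudely bounce off the floor, losing most of their energy.
random.seed(1)
GRAVITY, DT, FLOOR = -9.8, 0.01, 0.0

particles = [{"y": 2.0 + random.random(), "vy": 0.0} for _ in range(100)]

for _ in range(200):                       # two seconds of simulation
    for p in particles:
        p["vy"] += GRAVITY * DT
        p["y"]  += p["vy"] * DT
        if p["y"] < FLOOR:                 # crude collision: damped bounce
            p["y"], p["vy"] = FLOOR, -0.3 * p["vy"]

print(max(p["y"] for p in particles))      # everything has settled low
```

A real fluid solver would also make nearby particles push on each other to mimic pressure and viscosity; this sketch only shows the particle bookkeeping the technique is built on.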
We spend many hours every day looking at some sort of display, whether attached to a computer, phone, TV, or maybe even a car. These displays work on a relatively simple concept: liquid crystals filter the light provided by a backlight, which is usually white. This tech replaced the old CRT (cathode ray tube) technology, which shot electrons at a screen, scanning across it over and over to form the image. The next innovation in display tech is hopefully something like what Iron Man has in his suit, a type of transparent display. The current tech would work somewhat as a transparent display, but the colors would be affected by whatever light is behind it, as no light is produced by the display, only altered. This is where OLEDs (organic light-emitting diodes) come into play; they are one of the newest types of display technology, and they create their own light. This means the colors you see are being produced pixel by pixel, rather than a white light being altered pixel by pixel. It also allows for better contrast, as each individual pixel can turn off, making the display capable of true black rather than a sort of blocked backlight. This tech also allows for flexible displays and more varied form factors, some of which are already being used in TVs today.
A "Clean Room" is what it sounds like: a room which is very clean. There are varying grades of them, from the variety used to make watches to those used to produce satellites. The general idea is to prevent contamination of the air, typically by dust. In order to enter a clean room, one typically has to wear a full-body suit meant to contain everything within it, so no dust enters the room. No makeup is allowed inside, as those kinds of particles easily come off and float around in the air. What little dust does get into the air is filtered out by full-room air circulation: the clean room sits inside another, larger room that houses the filtration and air-handling equipment. This allows for different types of airflow, such as air that flows only directly downward, through a grated floor and back into the system. This is effective because larger particles fall into the filtration rather than staying in the room and possibly contaminating something. How clean these rooms need to be depends on what they are used for. For example, in a room where mechanical watches are assembled, it's necessary to keep any kind of dust out of the minuscule gears to prevent any possible mechanical failure. Clean rooms are used all over the world, and in a variety of ways. One of the largest is NASA's clean room at the Goddard Space Flight Center, which is about the size of a small warehouse. Each employee in the room also wears a wristband that discharges static electricity to a ground wire, preventing electrical damage and even the attraction of dust that static can cause.
Every computer has millions, if not billions, of transistors in it. These transistors have one job: to control the flow of electricity. They act as a switch, but without any physical moving parts. Their size is incredible, since they control electricity on a near-atomic level rather than a larger scale. The physical makeup of a transistor prevents the flow of electricity in one state, but when a small voltage is applied to its gate, it allows electricity to flow freely through it. Because of how small we are capable of making these transistors, we are beginning to run into issues that don't make much intuitive sense, like electrons tunneling through a closed transistor. This comes as a result of quantum physics and its seemingly random nature. If this weren't the case, transistors could continue to shrink until they were mere atoms in length. Even now, we are capable of creating incredibly tiny circuits, so much so that a small processor about an inch and a half wide can house upwards of 7.5 billion of them. If they got much smaller, data would become corrupt, as a single flipped 1 or 0 in binary code can have catastrophic results.
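The transistor-as-switch idea can be sketched in code: model each transistor as a gate-controlled switch, then wire two in series to get the NAND gate that processors are largely built from (a deliberate simplification, ignoring voltages and electrical details):

```python
# Transistor-as-switch sketch: a transistor "conducts" only when its
# gate is driven high. Two in series form the pull-down path of a
# NAND gate: the output goes low only when both conduct.

def transistor(gate_high):
    return gate_high   # conducts (True) only when the gate is high

def nand(a, b):
    path_to_ground = transistor(a) and transistor(b)
    return not path_to_ground

for a in (False, True):
    for b in (False, True):
        print(a, b, nand(a, b))
```

NAND is special because every other logic gate, and from there whole processors, can be built out of nothing but NANDs.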
The first microchips didn't need any kind of dedicated cooling; they were cooled by just the air around them. Now, chips produce enough heat that it needs to be actively transferred away for them to function properly. The solution was the heatsink, an array of spread-out metal fins in contact with the chip to carry away the heat. Over the years, these have increased in efficiency and size. The heat is spread out to the fins using copper pipes, and fans then push air over the fins to carry the heat away. Current PC hardware is so heat-efficient that with certain parts, the fans can stay off while the chip maintains a low temperature; when the chip is under load it outputs more heat, so eventually the fans turn on to cool it. Another cooling method is borrowed from cars: water is brought into contact with the chip (thermal transfer, not fluid transfer; the chip doesn't get wet) and pumped away to be cooled in a radiator. This is considered a more efficient and quieter way to cool computer hardware. These advancements make completely silent PCs possible, such as the one featured here.
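A rough way to compare coolers is thermal resistance: the chip settles at ambient temperature plus its power draw times the cooler's degrees-per-watt rating (the R values below are assumed ballpark figures, not real product specs):

```python
def chip_temp_c(power_w, r_theta, ambient_c=25.0):
    """Steady-state chip temperature: ambient plus power times the
    cooler's thermal resistance in degrees C per watt."""
    return ambient_c + power_w * r_theta

# The same 65 W chip under three assumed cooler ratings:
print(f"small passive heatsink: {chip_temp_c(65, 2.0):.0f} C")   # far too hot
print(f"tower cooler with fan:  {chip_temp_c(65, 0.4):.0f} C")
print(f"water loop + radiator:  {chip_temp_c(65, 0.15):.0f} C")
```

The same simple model explains fan curves: under load the power term grows, so the cooler must lower its effective resistance by spinning up.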
The trend of flipping a water bottle through the air to make it land upright grew rapidly, and it has some interesting physics behind it. The difficulty in the trick comes from the fact that when the bottle isn't full, it doesn't spin around its geometric center. As liquid is drained, the center of mass moves toward the bottom, so to the average observer the bottle flips in a strange and unpredictable way. But the rotation of the bottle is predictable, because the axis of rotation passes through the center of mass of the bottle. If you know this, it becomes easier to predict the movement of the bottle. The easiest flip comes when the center of mass is close to the bottom, around the point where the height of the liquid equals the diameter of the bottle. This makes the motion of the bottle very predictable, as it makes the bottle rotate around its bottom.
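With some rough assumed dimensions for a 500 ml bottle, you can search for the water level that puts the combined center of mass lowest (the exact optimum depends on the bottle's real shape and mass, so treat this as a sketch):

```python
import math

# Find the water level that puts the bottle's center of mass lowest.
# All dimensions are rough assumptions: a ~23 cm, 25 g cylinder.
BOTTLE_MASS, BOTTLE_H = 0.025, 0.23   # kg, m; empty-bottle COM at H/2
RADIUS, RHO = 0.033, 1000.0           # m, kg/m^3 (water)

def center_of_mass(water_h):
    water_mass = RHO * math.pi * RADIUS**2 * water_h
    moment = BOTTLE_MASS * BOTTLE_H / 2 + water_mass * water_h / 2
    return moment / (BOTTLE_MASS + water_mass)

# Try every water level in 1 cm steps and keep the lowest COM:
best = min((h / 100 for h in range(1, 24)), key=center_of_mass)
print(f"lowest COM with about {best * 100:.0f} cm of water")
```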
Almost every stage production uses a pulley or rope system to suspend something. Whether it be a curtain, a backdrop, or even an actor, using counterweights to hold something up is an old practice. Ropes are brought down from the ceiling, over pulleys, to a row along the wall. These ropes carry a plate with vertical pipes on it, and the weights are stacked on top of the plate. The other end is attached to a horizontal bar above the stage, to which multiple things can be attached. If the counterweight equals the weight of the bar and whatever it's carrying, the pulley system won't move on its own. The weights will never be exact, though, so a brake system that locks the ropes in place is used to ensure the bar is held where it should be.
Half-Life 2 was one of the first games to make a proper 3D physics system central to its gameplay, and while it wasn't flawless, it worked. It allowed the player to grab specific items, carry them around, and throw them into other objects, which would react accordingly. This was shown off a lot throughout the game, since the developers were proud of it. Now it has become commonplace for almost every game to have a physics engine, as it's called, even if it isn't a main gameplay element.
The topic of physics is particularly interesting when you look at the most popular genre, first-person shooters. Some games use a simulated projectile to calculate where the bullet will go, while others just use a "hitscan" system. Hitscan determines the path of the bullet as a ray protruding directly from the player's gun. It can go perfectly straight where it's being aimed, but most of the time there is a small random deviation. This method is simple but effective, as it allows much faster calculations to be done by a multiplayer server: if the ray hits a player, damage is dealt; if not, it either hits a wall or continues on into nothing. The projectile method actually launches the bullet and does all of the physics calculations for it, such as being affected by gravity and other variables. This method is used by all of the games in the Battlefield franchise, and is an integral part of the gameplay. Because the fighting happens on such a massive scale, having the bullet travel realistically makes a big difference. Different weapons shoot with different initial velocities, and some are affected less by gravity. This is used to keep some of the weapons fair: a weapon may do more damage, but its much slower velocity makes leading and actually hitting the target much more difficult. The drawback of this system is that the server does hundreds or thousands of these calculations per second, and with up to 64 players on a server at once, it can be very resource intensive. For people with poor internet, it can result in some strange errors, such as a shot hitting you after you've rounded a corner, or being hit by shots that clearly missed. Thanks to advancements in technology these calculations have become more efficient, and the fact that we can run a relatively accurate simulation of that many bullets at once is an impressive feat.
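The difference between the two methods shows up clearly in a sketch of the projectile side: stepping a bullet through time makes it drop below the aim point, something a hitscan ray never does (the muzzle speed and range here are arbitrary):

```python
# Projectile sketch: integrate a level shot forward in small time
# steps and measure how far it falls below the aim line downrange.
GRAVITY, DT = -9.8, 0.001

def projectile_drop(muzzle_speed, distance):
    """Bullet drop (meters) for a level shot at the given distance."""
    x = y = 0.0
    vx, vy = muzzle_speed, 0.0
    while x < distance:
        x += vx * DT
        vy += GRAVITY * DT   # gravity bends the trajectory each step
        y += vy * DT
    return -y

print(f"{projectile_drop(600, 300):.2f} m")  # ~1.2 m of drop at 300 m
```

A hitscan shot at the same target would simply hit the crosshair point instantly; here the shooter has to aim more than a meter high, which is exactly the skill element games like Battlefield are after.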
Formula 1 cars are well known for being among the fastest cars raced competitively, and their inner workings are just as amazing. These cars are very wide and low, giving them a low center of gravity. This helps with turning, as the cornering forces produce less torque and don't tip the car as much. But that isn't the only thing that helps these cars go so fast: their engines are incredible, and the body is extremely lightweight, made of a very complicated composite that includes carbon fiber. Since the car is so light, it could easily flip or fly up into the air on the crest of a rise. Because of this, the concept of using air resistance to push the car down was developed.
Called downforce, it's achieved by using large angled wings, sloped up toward the back of the car, so that the air pushes the car down into the ground; so much so that above a certain speed the downforce is greater than the weight of the car. Because of this, it is actually possible for an F1 car to drive upside down, although it would be a very difficult feat. Downforce is useful in many other ways, such as increasing the friction between the track and the tires, allowing for tighter turns. F1 cars are able to hold their speed through corners much better than other cars because they don't lose traction at speed.
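The drive-upside-down claim can be checked with the standard aerodynamic force formula, F = ½ρv²·C_L·A (the car mass and the combined C_L·A figure below are rough assumptions):

```python
# At what speed does aerodynamic downforce exceed the car's weight?
RHO  = 1.225    # kg/m^3, air density
CL_A = 4.0      # m^2, downward lift coefficient times area (assumed)
MASS = 800.0    # kg, car plus driver (assumed)
G    = 9.81

def downforce_n(speed_ms):
    return 0.5 * RHO * CL_A * speed_ms**2

v = 0.0
while downforce_n(v) < MASS * G:
    v += 0.1
print(f"downforce exceeds weight above ~{v * 3.6:.0f} km/h")
```

With these numbers the crossover lands around 200 km/h, comfortably within an F1 car's normal operating range, which is why the upside-down idea is at least physically plausible.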
The general convention in racing is to slow down to get around corners, but it's nearly the opposite with F1 cars: if the car slows down, the downforce decreases, lowering the speed at which it can turn. This dynamic makes F1 driving one of the most difficult racing sports in the world, requiring incredible reaction speed and skill.