Jump to content

Justin Gallagher

Members
  • Posts

    32
  • Joined

  • Last visited

Everything posted by Justin Gallagher

  1. When I first saw this movie, there was one big question I had. Why is the planet Miller not pulled into the Black Hole Gargantua by the gravitational force, and so I did some research and found out some cool stuff. One of the main reasons Planet Miller isn't pulled into the black hole in spite of its proximity is that the adviser, Kip Thorne made sure that Gargantua was a rapidly spinning black hole—and it turns out that the physics of rotating black holes differ from non-rotating ones. The sheer speed of Gargantua's rotation means there is a single stable orbit just outside of Gargantua's event horizon that is very stable. However, Gargantua would have to fill half the sky in order for it to be so close. With spinning black holes, the area where the time dilates as drastically as in the movie is expanded exponentially, which allows for a small area where an object can orbit. Another cool thing about this movie is the the tidal waves on the planet miller. According to The Science of Interstellar by Kip Thorne, Miller's planet is shaped a little like a football, with one end constantly pointing at Gargantua. The waves are literally tidal waves, so it's not the waves coming toward you, it's the planet rotating under you and the fixed waves slamming into you. But because the planet doesn't rotate, the waves wouldn't slam into you. Fortunately, tidally locked planets can rock back and forth, and Thorne used this as a scientifically accurate loophole to explain tidal waves on a tidally locked planet. Also, because the water on Miller is mostly concentrated in the waves, you could have knee-high oceans, like the one shown in the film.
  2. Building a potato (or lemon or apple) battery reveals a bit about the inner workings of electrical circuits. To do this simple science experiment, you insert two different metallic objects often a galvanized (zinc-coated) nail and a copper penny into the potato, and connect wires to each object with alligator clips. These wires can be attached either to the two terminals of a multimeter (which measures a circuit's voltage) or to something like a digital clock or lightbulb. (It may take two or three potatoes wired in series to generate enough voltage to power those devices.) The potato acts like a battery, generating a current of electrons that flow through the wire. This happens because acid in the potato induces a chemical change in the zinc that coats the nail. The acid acts as an "electrolyte," ionizing the zinc atoms by stripping two electrons from each of them and leaving them positively charged. Those electrons are conducted away from the zinc ions through the wire and through whatever devices lie along the circuit and end up at the copper penny. From there, they join up with positive hydrogen ions in the potato starch that have been repelled there by the nearby zinc ions. The movement of these electrons is enough to power a toy clock or light bulb.
  3. The spinning top, a toy found across many of the world's cultures and even among ancient archaeological ruins, lays bare some profound physical principles. The first is the conservation of angular momentum, the law that dictates that, in the absence of external influences, something spinning must keep spinning. Because a top balances upon a tiny point, it experiences a minimal amount of friction with the surface below it, and thus continues spinning for a delightfully long time, demonstrating the law. But as friction eventually slows the top, it becomes unstable and starts to wobble, leading to the demonstration of another principle, called "precession." When the top wobbles, its axis of rotation the invisible line running vertically through its center tips sideways, making an angle with the table. This angle allows the force of gravity to exert a "torque" on the top, putting additional spin on it, and this causes it to swing (or precess) outward in an arc, still spinning as it does so. In an effort to conserve its total angular momentum, the top precesses faster the slower it spins; this explains why tops typically lurch outward just as friction brings their spinning to a stop.
  4. If you run really fast, you gain weight. Not permanently, or it would make a mockery of diet and exercise plans, but momentarily, and only a tiny amount. Light speed is the speed limit of the universe. So if something is travelling close to the speed of light, and you give it a push, it can’t go very much faster. But you've given it extra energy, and that energy has to go somewhere. Where it goes is mass. According to relativity, mass and energy are equivalent. So the more energy you put in, the greater the mass becomes. This is negligible at human speeds – Usain Bolt is not noticeably heavier when running than when still – but once you reach an appreciable fraction of the speed of light, your mass starts to increase rapidly.
  5. The speed of light in a vacuum is a constant: 300,000km a second. However, light does not always travel through a vacuum. In water, for example, photons travel at around three-quarters that speed. In nuclear reactors, some particles are forced up to very high speeds, often within a fraction of the speed of light. If they are passing through an insulating medium that slows light down, they can actually travel faster than the light around them. When this happens, they cause a blue glow, known as Cherenkov Radiation, which is comparable to a sonic boom but with light. Incidentally, the slowest light has ever been recorded traveling was 17 meters per second – about 38 miles an hour – through rubidium cooled to almost absolute zero, when it forms a strange state of matter called a Bose-Einstein condensate. Scientists at the University of Darmstadt in Germany have stopped light for one minute. For one whole minute, light, which is usually the fastest thing in the known universe and travels at 300 million meters per second, was stopped dead still inside a crystal. This effectively creates light memory, where the image being carried by the light is stored in crystals. Beyond being utterly cool, this breakthrough could lead to the creation of long-range quantum networks — and perhaps, tantalizingly, this research might also give us some clues on accelerating light beyond the universal speed limit. To stop light, the German researchers use a technique called electromagnetically induced transparency (EIT). They start with a cryogenically cooled opaque crystal of yttrium silicate doped with praseodymium. A control laser is fired at the crystal, triggering a complex quantum-level reaction that turns it transparent. A second light source is then beamed into the now-transparent crystal. The control laser is then turned off, turning the crystal opaque. Not only does this leave the light trapped inside, but the opacity means that the light inside can no longer bounce around — the light, in a word, has been stopped. With nowhere to go, the energy from the photons is picked up by atoms within the crystal, and the “data†carried by the photons is converted into atomic spin excitations. To get the light back out of the crystal, the control laser is turned back on, and the spin excitations are emitted at photons. These atomic spins can maintain coherence for around a minute, after which the light pulse/image fizzles. In essence, this entire setup allows the storage and retrieval of data from light memory.
  6. Inflation has become a cosmological buzzword in the 1990s. No self-respecting theory of the Universe is complete without a reference to inflation -- and at the same time there is now a bewildering variety of different versions of inflation to choose from. Clearly, what's needed is a beginner's guide to inflation, where newcomers to cosmology can find out just what this exciting development is all about. The reason why something like inflation was needed in cosmology was highlighted by discussions of two key problems in the 1970s. The first of these is the horizon problem -- the puzzle that the Universe looks the same on opposite sides of the sky (opposite horizons) even though there has not been time since the Big Bang for light (or anything else) to travel across the Universe and back. So how do the opposite horizons "know" how to keep in step with each other? The second puzzle is called the flatness problem This is the puzzle that the spacetime of the Universe is very nearly flat, which means that the Universe sits just on the dividing line between eternal expansion and eventual recollapse. Ever since 1905, when Albert Einstein revealed his special theory of relativity to the world, the speed of light has had a special status in the minds of physicists. In a vacuum, light travels at 299 792 458 meters per second, regardless of the speed of its source. There is no faster way of transmitting information. It is the cosmic speed limit. Our trust in its constancy is reflected by the pivotal role it plays in our standards of measurement. We can measure the speed of light with such accuracy that the standard unit of length is no longer a sacred meter bar kept in Paris but the distance traveled by light in a vacuum during one 299 792 458th of a second. So I ask? Why do opposite sides of the universe look the same? It's a puzzle, you see, because the extremes of today's visible universe should never have been in touch. Even back in the early moments of the big bang, when these areas were much closer together, there wasn't enough time for light - or anything else - to travel from one to another. There was no time for temperature and density to get evened out; and yet they are even. One solution: light used to move much faster. But to make that work could mean a radical overhaul of Einstein's theory of relativity. During inflation the Universe expanded a factor of 1054, so that our horizon now only sees a small piece of what was the total Universe from the Big Bang. The cause of the inflation era was the symmetry breaking at the GUT unification point. At this moment, spacetime and matter separated and a tremendous amount of energy was released. This energy produced an overpressure that was applied not to the particles of matter, but to spacetime itself. Basically, the particles stood still as the space between them expanded at an exponential rate. Note that this inflation was effectively at more than the speed of light, but since the expansion was on the geometry of the Universe itself, and not the matter, then there is no violation of special relativity. Our visible Universe, the part of the Big Bang within our horizon, is effectively a `bubble' on the larger Universe. However, those other bubbles are not physically real since they are outside our horizon. We can only relate to them in an imaginary, theoretical sense. They are outside our horizon and we will never be able to communicate with those other bubble universes. Inflation solves the flatness problem because of the exponential growth. 
Imagine a highly crumbled piece of paper. This paper represents the Big Bang universe before inflation. Inflation is like zooming in of some very, very small section of the paper. If we zoom in to a small enough scale the paper will appear flat. Our Universe must be exactly flat for the same reason, it is a very small piece of the larger Big Bang universe. The horizon problem is also solved in that our present Universe was simply a small piece of a larger Big Bang universe that was in causal connection before the inflation era. Other bubble universes might have very different constants and evolutionary paths, but our Universe is composed of a small, isotropic slice of the bigger Big Bang universe.
  7. For many years, we believed that the Earth was flat and that one could eventually fall off of the globe. We also believed that the Earth was the center of the universe (some people still do). And several ancient civilizations even used to use mercury as a medicine. Fortunately, we tested these ideas and came up with better ones. Since Sir Isaac Newton described gravity in his publication, "Principia." in 1687, to John Michell conjectured that there might be an object massive enough to have an escape velocity greater than the speed of light, to 1970 when Stephen Hawking defined modern theory of black holes, which describes the final fate of black holes, we have always been fascinated about the nature of black holes. But what if I tell you that’s not it. You don't believe me? Well that’s okay. To those that do not believe me, leave now or just shut up and don't comment. To those that are interested, prepare to have your mind blown. Gravastar: What is it? This is an unconventional idea that is as interesting as it is odd. This hypothesis was originally put forward by Mazur and Mottola in 2004. Gravastar literally means “Gravitational Vacuum Condensate Star,†which is (in theory) an extension of theBose-Einstein Condensate and put forward as a part of gravitational systems. Ultimately, it is meant to stand as an alternative to black holes. One of the benefits of the Gravastar over that if an ordinary black hole is that of entropy, the current accepted models of black holes have them having a very large entropy value. Gravastars, on the other hand, have quite a low entropy. The theory goes that, as a star collapses further [past the point of neutron degeneracy] the particles fall into a Bose-Einstein state where the entire star [all of the collapsing material] nears absolute zero and is able to get very compact. As a result, it acts as a giant atom composed of bosons. The interior of these Gravastars is thought that it might be within a de Sitter Spacetime, which means that it has a positive vacuum energy which could give rise to an internal negative pressure. Most of the math that goes into explaining this new model for a black hole is extremely complex. It suffices to say that this theoretical model consists of 5 different layers that construct the Gravastar, with de sitter spacetime effectively creating the negative pressure that keeps the Gravastar from collapsing along with some other mathematical constructs. Instead of using the Einstein field equations to calculate the event horizon of a black hole, Mazur & Mottola put forward that the event horizon (as we know it) is actually the outer shell of the Bose-Einstein matter, anything that comes in contact with it becomes a part of it–similar to matter hitting a neutron star and being broken down into neutrons due to the environment. Over the past few years this model has been getting more and more attention as a contender of the current black hole model; however, it still only has a small “following†in the grand scheme of things.
  8. The Grand Unified Theory is a vision of a physics theory that can combine three of the four fundamental forces into one single equation. The four forces are the Strong Nuclear Force, the Weak Nuclear Force, the Electro-Magnetic Force, and the Gravitational Force. The EM and Weak forces were initially thought to be two separate forces until scientists discovered one theory (the Electro Weak theory) to explain both of them and then went on to observe this unified force in action (much like Maxwell unified the electric and magnetic forces into the Electro-Magnetic Force). If a Grand Unification of all the interactions is possible, then all the interactions we observe are all different aspects of the same, unified interaction. However, how can this be the case if strong and weak and electromagnetic interactions are so different in strength and effect? Strangely enough, current data and theory suggests that these varied forces merge into one force when the particles being affected are at a high enough energy. The grand unification energy is the energy level above which, it is believed, the electromagnetic force, weak force, and strong force become equal in strength and unify to one force governed by asimple Lie group. Specific Grand unified theories can predict the grand unification energy but, usually, with large uncertainties due to model dependent details such as the choice of the gauge group, the Higgs sector, the matter content or further free parameters. Furthermore, at the moment it seems fair to state that there is no agreed minimal GUT. The unification of the electroweak forces and the strong force with the gravitational force in a so-called "Theory of Everything" requires an even higher energy level which is generally assumed to be close to the Planck Scale. In theory, at such short distances, gravity becomes comparable in strength to the other three forces of nature known to date. This statement is modified if there exist additional dimensions of space at intermediate scales. In this case, the strength of gravitational interactions increases faster at smaller distances, and the energy scale at which all known forces of nature unify, can be considerably lower. This effect is exploited in models of large extra dimensions. The exact value of the grand unification energy (if grand unification is indeed realized in nature) depends on the precise physics present at shorter distance scales not yet explored by experiments. If one assumes the Desert and supersymmetry, it is at around 1016 GeV. The most powerful collider to date, the LHC, is designed to reach a center of mass energy of 1.4x104 GeV in proton-proton collisions. The scale 1016 GeV is only a few orders of magnitude below the Planck scale, and thus not within reach of man-made earth bound colliders at the current moment.
  9. Experiment after experiment has tried to find flaws in the Standard Model's predictions, but so far all the experimental evidence supports it. Nevertheless, scientists do not believe that the Standard Model provides complete answers to all our questions about matter. It describes everything we see in the laboratory. Aside from leaving gravity out, it's a complete theory of what we see in nature. But it's not an entirely satisfactory theory, because it has a number of arbitrary elements. For example, there are a lot of numbers in this standard model that appear in the equations, and they just have to be put in to make the theory fit the observation. For example, the mass of the electron, the masses of the different quarks, the charge of the electron. If you ask, "Why are those numbers what they are? Why, for example, is the top quark, which is the heaviest known elementary particle, something like 300,000 times heavier than the electron?" The answer is, "We don't know. That's what fits experiment." That's not a very satisfactory picture. When you look even closer there are many things wrong with this model. Some of these include: Gravity: Most important of all. Where the hell's gravity? A theory of quantum gravitation, or more formally quantum geometrodynamics, does not yet exist. Incorporating gravity into particle physics looks to be a horrendous challenge. Arbitrary parameters (like the mass of the electron) Planck Limits: The Standard Model describes quite accurately physics near the electroweak symmetry breaking scale (246 GeV). But the Standard Model is only a "low energy" approximation to a more fundamental theory. The Standard Model cannot be valid at energies above the Planck scale (~1019 GeV), where gravity can no longer be ignored. Cosmology: dark matter and dark energy. Cosmological observations tell us the standard model explains about 5% of the energy present in the Universe. About 26% should be dark matter, which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates. The rest (69%) should be dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude. Matter-antimatter asymmetry: the Universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the Universe did not involve disproportionate matter relative to antimatter. Yet, no mechanism sufficient to explain this asymmetry exists in the Standard Model. Once we decide to tackle gravity, the Standard Model as we know it transforms beyond recognition and an ultimate Theory of Everything becomes possible. We could then say that physics has reached its end.
  10. The Higgs boson or Higgs particle is an elementary particle in the Standard Model of particle physics. Its main relevance is that it allows scientists to explore the Higgs field – a fundamental field first suspected to exist in the 1960s that unlike the more familiar electromagnetic field cannot be "turned off", but instead takes a non-zero constant value almost everywhere. For a subatomic particle that remained hidden for nearly 50 years, the Higgs boson is turning out to be remarkably well behaved. Yet more evidence from the world's largest particle accelerator, the Large Hadron Collider (LHC) in Switzerland, confirms that the Higgs boson particle, thought to explain why other particles have mass, acts just as predicted by the Standard Model, the dominant physics theory that describes the menagerie of subatomic particles that make up the universe. The new results show that the Higgs boson decays into subatomic particles that carry matter called fermions — in particular, it decays into a heavier brother particle of the electron called a tau lepton. This decay has been predicted by the Standard Model. Even so, the findings are a bit of a disappointment for physicists who were hoping for hints of completely new physics. On July 4th, 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c2 was announced; physicists suspected that it was the Higgs boson, an elusive particle first proposed 50 years ago by English physicist Peter Higgs. In Higgs' conception, in the blink after the Big Bang, an energy field, now dubbed the Higgs field, emerged that imparts mass to the subatomic particles that trawl through it. Particles that are "stickier" and slow down more while traversing the field become heavier. Because subatomic particles are either matter carriers called fermions, such as electrons and protons, or force-carrying particles called bosons, such as photons and gluons, the existence of the Higgs field implied an associated force-carrying particle, called the Higgs boson, which is like a ripple in that field. The 2012 discovery left little doubt that the Higgs boson exists, however, there were still many unanswered questions. Is there one Higgs boson or multiple? If there are multiple, what are their masses? And just how do these different-flavored Higgs behave? To answer those questions, physicists still had to pore over tons of data from the LHC, which accelerates protons to just below the speed of light, then smashes them together, creating a shower of subatomic particles. When the LHC collaborators analyzed those Higgs events, they found about 6 percent of the elusive particles decayed into tau leptons. And though not unexpected, the new results show no hint of additional Higgs bosons that would lend credence to alternate theories such as supersymmetry, which predicts that every particle currently known has a "superpartner" with slightly different properties. The idea of the Higgs decaying to tau leptons was somewhat tacked onto the Standard Model after its creation, yet this addition to the Standard model turns out to be how nature does it. But there are still a few pieces left to complete the picture predicted by the Standard Model.
  11. While I visited the Rochester Institute of technology over the break, I talked to a Junior who was majoring in Physics. He was explaining to me what he was working on and theorized. He was currently working on the Grand Unified Theory. This interested me quite a bit so I did some research into this subject. It all starts with the Fundamental forces and their Interactions. There are 4 fundamental forces that have been identified. In our present Universe they have rather different properties. Properties of the Fundamental Forces: The Strong Nuclear Force is very strong, but very short-ranged. It acts only over ranges of order 10-13 centimeters and is responsible for holding the nuclei of atoms together. Since the protons and neutrons which make up the nucleus are themselves considered to be made up of quarks, and the quarks are considered to be held together by the color force, the strong force between nucleons may be considered to be a residual color force. In the standard model, therefore, the basic exchange particle is the gluon which mediates the forces between quarks. Since the individual gluons and quarks are contained within the proton or neutron, the masses attributed to them cannot be used in the range relationship to predict the range of the force. When something is viewed as emerging from a proton or neutron, then it must be at least a quark-antiquark pair, so it is then plausible that the pion as the lightest meson should serve as a predictor of the maximum range of the strong force between nucleons. The Electromagnetic Force manifests itself through the forces between charges (Coulomb's Law) and the magnetic force, both of which are summarized in the Lorentz force law. Fundamentally, both magnetic and electric forces are manifestations of an exchange force involving the exchange of photons . The electromagnetic force holds atoms and molecules together. In fact, the forces of electric attraction and repulsion of electric charges are so dominant over the other three fundamental forces that they can be considered to be negligible as determiners of atomic and molecular structure. Even magnetic effects are usually apparent only at high resolutions, and as small corrections. The Role of the Weak Nuclear Force in the transmutation of quarks makes it the interaction involved in many decays of nuclear particles which require a change of a quark from one flavor to another. It was in radioactive decay such as beta decay that the existence of the weak interaction was first revealed. The weak interaction is the only process in which a quark can change to another quark, or a lepton to another lepton - the so-called "flavor changes". The Gravitational Force is weak, but very long ranged. It is by far the weakest of the four interactions. The weakness of gravity can easily be demonstrated by suspending a pin using a simple magnet (such as a refrigerator magnet). The magnet is able to hold the pin against the gravitational pull of the entire Earth. Yet gravitation is very important for macroscopic objects and over macroscopic distances. It is the only interaction that acts on all particles having mass; it has an infinite range, like electromagnetism but unlike strong and weak interaction; it cannot be absorbed, transformed, or shielded against and it always attracts and never repels.
  12. In 1889, inspired by a famous astronomical drawing that had been circulating in Europe for four decades, Vincent van Gogh painted his iconic masterpiece “The Starry Night,†one of the most recognized and reproduced images in the history of art. At the peak of his lifelong struggle with mental illness, he created the legendary painting while staying at the mental asylum into which he had voluntarily checked himself after mutilating his own ear. But more than a masterwork of art, Van Gogh’s painting turns out to hold astounding clues to understanding some of the most mysterious workings of science. This fascinating short animation from TED-Ed and Natalya St. Clair, author of The Art of Mental Calculation, explores how “The Starry Night†sheds light on the concept of turbulent flow in fluid dynamics, one of the most complex ideas to explain mathematically and among the hardest for the human mind to grasp. From why the brain’s perception of light and motion makes us see Impressionist works as flickering, to how a Russian mathematician’s theory explains Jupiter’s bright red spot, to what the Hubble Space Telescope has to do with Van Gogh’s psychotic episodes, this mind-bending tour de force ties art, science, and mental health together through the astonishing interplay between physical and psychic turbulence.
  13. Hopefully you have read the Quantum Foam, blog, if not, that is fine. Commence the melting of your brains. Are ya ready? In physics, a spinfoam or spin foam is a topological structure made out of two-dimensional faces that represents one of the configurations that must be summed to obtain a Feynman's path integral (functional integration) description of quantum gravity. It is closely related to loop quantum gravity. Loop Quantum Gravity has a covariant formulation that, at present, provides the best formulation of the dynamics of the theory of Quantum Gravity. This is a Quantum Field Theory where the invariance under diffeomorphisms of general relativity is implemented. The resulting path integral represents a sum over all the possible configuration of the geometry, coded in the spinfoam. A spin network is defined as a diagram (like the Feynman diagram) that makes a basis of connections between the elements of a differentiable manifold for the Hilbert spaces defined over them. Spin networks provide a representation for computations of amplitudes between two different hypersurfaces of the manifold. Any evolution of spin network provides a spin foam over a manifold of one dimension higher than the dimensions of the corresponding spin network. A.K.A. Spin foam may be viewed as a quantum history. Spin networks provide a language to describe quantum geometry of space. Spin foam does the same job on spacetime. A spin network is a one-dimensional graph, together with labels on its vertices and edges which encodes aspects of a spatial geometry. Spacetime is considered as a superposition of spin foams, which is a generalized Feynman diagram where instead of a graph we use a higher-dimensional complex. In topology this sort of space is called a 2-complex. A spin foam is a particular type of 2-complex, together with labels for vertices, edges and faces. The boundary of a spin foam is a spin network, just as in the theory of manifolds, where the boundary of an n-manifold is an (n-1)-manifold. In Loop Quantum Gravity, the present Spinfoam Theory has been inspired by the work of Ponzano-Regge model. The concept of a spin foam, although not called that at the time, was introduced in the paper "A Step Toward Pregeometry I: Ponzano-Regge Spin Networks and the Origin of Spacetime Structure in Four Dimensions" by Norman J. LaFave. In this paper, the concept of creating sandwiches of 4-geometry (and local time scale) from spin networks is described, along with the connection of these spin 4-geometry sandwiches to form paths of spin networks connecting given spin network boundaries (spin foams). Quantization of the structure leads to a generalized Feynman path integral over connected paths of spin networks between spin network boundaries. This paper goes beyond much of the later work by showing how 4-geometry is already present in the seemingly three dimensional spin networks, how local time scales occur, and how the field equations and conservation laws are generated by simple consistency requirements. The partition function for a spin foam model is, in general...
  14. Yeah you heard that right. I just said Quantum Foam. And the best part, it is an actual science term. It's time to blow up some minds. Like I said In my "The imposible Conundrum" blogs, how things can be created from nothing, this is what happens down at the quantum level of this idea. Quantum foam (also referred to as space-time foam) is a concept in quantum mechanics devised by John Wheeler in 1955. The foam is supposed to be conceptualized as the foundation of the fabric of the universe. Additionally, quantum foam can be used as a qualitative description of subatomic space-time turbulence at extremely small distances (on the order of the Planck length). At such small scales of time and space, the Heisenberg uncertainty principle allows energy to briefly decay into particles and antiparticles and then annihilate without violating physical conservation laws. As the scale of time and space being discussed shrinks, the energy of the virtual particles increases. According to Einstein's theory of general relativity, energy curves space-time. This suggests that—at sufficiently small scales—the energy of these fluctuations would be large enough to cause significant departures from the smooth space-time seen at larger scales, giving space-time a "foamy" character. This relates to other theories in many ways. In the "The Impossible Conundrum" Blogs, the arising then annihilating cause "vacuum fluctuations" which affect the properties of the vacuum, giving it a nonzero energy known as vacuum energy, itself a type of zero-point energy. However, physicists are uncertain about the magnitude of this form of energy. The Casimir effect can also be understood in terms of the behavior of virtual particles in the empty space between two parallel plates. Ordinarily, quantum field theory does not deal with virtual particles of sufficient energy to curve spacetime significantly, so quantum foam is a speculative extension of these concepts which imagines the consequences of such high-energy virtual particles at very short distances and times. Spin foam theory is a modern attempt to make Wheeler's idea quantitative. Tune in next time as I talk about Spin Foam.
  15. To really sum everything up is quite simple. According to the strong anthropic principle, there are either many different universes or many different regions of a single universe, each with its own initial configuration and, perhaps, with its own set of laws of science. In most of these universes the conditions would not be right for the development of complicated organisms; only in the few universes that are like ours would intelligent beings develop and ask the question: "Why is the universe the way we see it?" The answer is then simple: If it had been different, we would not be here! There are something like ten million million million million million million million million million million million million million million (1 with eighty zeroes after it) particles in the region of the universe that we can observe. Where did they all come from? The answer is that, in quantum theory, particles can be created out of energy in the form of particle/antiparticle parts. But that just raises the question of where the energy came from. The answer is that the total energy of the universe is exactly zero. The matter in the universe is made out of positive energy. However, the matter is all attracting itself by gravity. Two pieces of matter that are close to each other have less energy than the same two pieces a long way apart, because you have to expend energy to separate them against the gravitational force that is pulling them together. Thus in a sense, the gravitational field has negative energy. In the case of a universe that is approximately uniform in space, one can show that this negative gravitational energy exactly cancels the positive energy represented by the matter. So the total energy of the universe is zero. Now twice zero is also zero. Thus the universe can double the amount of positive matter energy and also double the negative gravitational energy without violation of the conservation of energy. One could say: "The boundary condition of the universe is that it has no boundary." The universe would be completely self-contained and not affected by anything outside itself. It would neither be created nor destroyed. It would just BE. The idea that space and time may form a closed surface without boundary also has profound implications for the role of God in the affairs of the universe. With the success of scientific theories in describing events, most people have come to believe that God allows the universe to evolve according to a set of laws and does not intervene in the universe to break these laws. However, the laws do not tell us what the universe should have looked like when it started - it would still be up to God to wind up the clockwork and choose how to start it off. So long as the universe had a beginning, we could suppose it had a creator. But if the universe is really completely self-contained, having no boundaries or edge, it would have neither beginning nor end: it would simply be. Like the south pole on the earth. What is south of the south pole? When you understand what I am saying, then as yourself: What place, then, for a creator? So when people ask if a God created the universe, I tell them that the question itself makes no sense. Time didn't exist before the big bang, so there is no time for God to make a universe. It's like asking for directions to the edge of the earth. In early history, the answer would simply be travel in any direction and you will eventually get there. 
But eventually one person came along and asked for proof and found everything about the earth having an edge was wrong. The earth is a sphere. It doesn't have an edge, so looking for it is a futile exercise. We are each free to believe what we want, yet it is my view that is the only one that has evidence. The one that is always the simplest explanation: There is no god. No one created the universe and no one directs our fate. There is no meaning to life. We are here by the tweaking of laws over an infinite number of time. This leads me to a profound realization. There is probably no heaven or hell, and no afterlife either. We have this one life to appreciate the grand design of the universe, and for that, I am extremely grateful.
  16. As I said in Part 1, which you should read before this part, Some would claim the answer to these questions is that there is a God. One who chose to create the universe the way it is. It is reasonable to ask who or what created the universe, but if the answer is that “God chose toâ€, then the question has merely been deflected to that of who created God. In this view, it is accepted that some entity exists which needs no creator, and that entity is called God. It has been claimed, however, that it is possible to answer these questions purely within the realm of science, and without invoking any divine beings. According to the idea of model-dependent realism our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. To put it more simply: We see the universe the way it is because we exist only in this universe. There is no model independent test of reality. It follows that a well-constructed model creates a reality of its own. An example that can help us think about issues of reality and creation is the Game of Life, invented in 1970 by a young mathematician at Cambridge named John Conway. The word “game†in the Game of Life is a misleading term. There are no winners and losers; in fact, there are no players. The Game of Life is not really a game but a set of laws that govern a two dimensional universe. It is a deterministic universe: Once you set up a starting configuration, or initial condition, the laws determine what happens in the future. What makes this universe interesting is that although the fundamental “physics†of this universe is simple, the “chemistry†can be complicated. That is, composite objects exist on different scales. At the smallest scale, the fundamental physics tells us that there are just live and dead squares. On a larger scale, there are gliders, blinkers, and still-life blocks. At a still larger scale there are even more complex objects, such as glider guns: stationary patterns that periodically give birth to new gliders that leave the nest and stream down the diagonal. If you observed the Game of Life universe for a while on any particular scale, you could deduce laws governing the objects on that scale. For example, on the scale of objects just a few squares across you might have laws such as “Blocks never move,†“Gliders move diagonally,†and various laws for what happens when objects collide. You could create an entire physics on any level of composite objects. The laws would entail entities and concepts that have no place among the original laws. For example, there are no concepts such as “collide†or “move†in the original laws. Those describe merely the life and death of individual stationary squares. As in our universe, in the Game of Life your reality depends on the model you employ. The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life.
  17. What is the Meaning of Life and Why are we Here? Most if not all of you readers thought of the number 42. Well even though we can thank Douglas Adams for that one in his novel, "The Hitchhiker’s Guide to the Galaxy†the answer is not that simple. Some would claim the answer to these questions is that there is a God. One who chose to create the universe the way it is. It is reasonable to ask who or what created the universe, but if the answer is that “God chose toâ€, then the question has merely been deflected to that of who created God. In this view, it is accepted that some entity exists which needs no creator, and that entity is called God. Personally, I feel that the reason we are here is very simple. Everything, the meaning of life and why we are here, is nothing more than the laws of physics at work. As discovered throughout time. The sun, the moon, and the planets are governed by fixed laws rather than being subject to the arbitrary whims and caprices of gods and demons. At first the existence of such laws became apparent only in early civilizations. The behavior of things on earth is so complicated and subject to so many influences that early civilizations were unable to discern any clear patterns or laws governing these phenomena. Gradually, however, new laws were discovered in areas other than astronomy, and this led to the idea of scientific determinism: There must be a complete set of laws that, given the state of the universe at a specific time, would specify how the universe would develop from that time forward. These laws should hold everywhere and at all times; otherwise they wouldn't be laws. There could be no exceptions or miracles. Gods or demons couldn't intervene in the running of the universe. At the time that scientific determinism was first proposed, Newton’s laws of motion and gravity were the only laws known. We have described how these laws were extended by Einstein in his general theory of relativity, and how other laws were discovered to govern other aspects of the universe. The laws of nature tell us how the universe behaves, but they don’t answer the why? Life's Improbability How could the apparently miraculous design of living forms appear without intervention by a supreme being? This can be shown, as part of M-theory, by what is known to today’s sciences as the multiverse theory. It states that there are billions upon trillions of other universes. To visualize all the universes, we can describe them as a line. A line stretching in both directions endlessly. Any point on the line would be a universe with certain values to the fundamental laws, such as strength of the strong nuclear force, and the gravitational constant. Some points will create a universe that, just after creation, will destroy itself, falling back into a single point. Others will not have a strong enough binding force and never create any large mass objects and just expand faster and faster indefinitely. If you pick two points at random, the universes you pick would seem very differently. However, by picking two points close to each other you see that they fundamental laws are very similar with only the slightest variation between each. This represents how the multiverse concept explains the fine-tuning of physical law without the need for a benevolent creator who made the universe for our benefit. If you keep going down the line of infinite universes, you will eventually get a universe that can sustain itself. 
Then a little farther, you will find one that can support galaxies, then stars, then solar systems with orbiting planets. Just a tiny fraction father you will find one that can support life. This is a video Stephen Hawking did for his series Into the Universe. It helps to describe this idea. http://www.discovery.com/tv-shows/other-shows/videos/other-shows-into-the-universe-with-stephen-hawking/ Tune in to part 2
  18. Weclome to part 2 of this blog series, If you have not read part one, please read it before reading this. Here is the Link to part 1: One of the great theories of modern cosmology is that the universe began in a Big Bang. This is not just an idea but a scientific theory backed up by numerous lines of evidence. For a start, there is the cosmic microwave background, which is a kind of echo of the big bang; then there is the ongoing expansion of the cosmos, which when imagined backwards, hints at a Big Bang-type origin; and the abundance of the primordial elements, such as helium-4, helium-3, deuterium and so on, can all be calculated using the theory. But that still leaves a huge puzzle. What caused the Big Bang itself? For many years, cosmologists have relied on the idea that the universe formed spontaneously, that the Big Bang was the result of quantum fluctuations in which the Universe came into existence from nothing. That’s plausible, given what we know about quantum mechanics. But physicists really need more — a mathematical proof to give the idea flesh. The new proof is based on the idea that the Big Bang could indeed have occurred spontaneously because of quantum fluctuations from the use of a special set of solutions to a mathematical entity known as the Wheeler-DeWitt equation. In the first half of the 20th century, cosmologists struggled to combine the two pillars of modern physics— quantum mechanics and general relativity—in a way that reasonably described the universe. As far as they could tell, these theories were entirely at odds with each other. The breakthrough came in the 1960s when the physicists John Wheeler and Bryce DeWitt combined these previously incompatible ideas in a mathematical framework now known as the Wheeler-DeWitt equation. The new work of Dongshan and co explores some new solutions to this equation. At the heart of their thinking is Heisenberg’s uncertainty principle. This allows a small empty space to come into existence probabilistically due to fluctuations in what physicists call the metastable false vacuum. When this happens, there are two possibilities. If this bubble of space does not expand rapidly, it disappears again almost instantly. But if the bubble can expand to a large enough size, then a universe is created in a way that is irreversible. The question is: does the Wheeler-DeWitt equation allow this? It must be proven that once a small true vacuum bubble is created, it has the chance to expand exponentially. Their approach is to consider a spherical bubble that is entirely described by its radius. They then derive the equation that describes the rate at which this radius can expand. They then consider three scenarios for the geometry of the bubble — whether closed, open or flat. In each of these cases, they find a solution in which the bubble can expand exponentially and thereby reach a size in which a universe can form—a Big Bang. That’s a result that cosmologists should be able to build on. It also has an interesting corollary. One important factor in today’s models of the universe is called the cosmological constant. This is a term that describes the energy density of the vacuum of space. It was originally introduced by Einstein in his 1917 general theory of relativity and later abandoned by him after Hubble’s discovery that the universe was expanding. Until the 1990s, most cosmologists assumed that the cosmological constant was zero. 
But more recently, cosmologists have found evidence that something is causing the expansion of the universe to accelerate, implying that the cosmological constant cannot be zero. So any new theory of the universe must allow for a non-zero value of the cosmological constant. What plays the role of the cosmological constant in Dongshan and co’s new theory? Interestingly, these guys say a quantity known as the quantum potential plays the role of cosmological constant in the new solutions. This potential comes from an idea called pilot-wave theory developed in the mid-20th century by the physicist David Bohm. This theory reproduces all of the conventional predictions of quantum mechanics but at the price of accepting an additional term known as the quantum potential. The theory has the effect of making quantum mechanics entirely deterministic since the quantum potential can be used to work out things like the actual position of the particle. However, mainstream physicists have never taken to Bohm’s idea because its predictions are identical to the conventional version of the theory so there is no experimental way of telling them apart. However, it forces physicists to accept a probabilistic explanation for the nature of reality, something they are generally happy to accept. The fact that the quantum potential is a necessary part of this new mathematical derivation of the origin of the universe is fascinating. All in all, everyday we come closer to full understanding how everything came to be, and yet we are still far from our destination.
  19. In this Two Part Blog Series, we will be looking at how the universe from the big bang, can exist out of nothing. According to the First Law of Thermodynamics, nothing in the Universe (i.e., matter or energy) can pop into existence from nothing. All of the scientific evidence points to that conclusion. So, the Universe could not have popped into existence before the alleged “big bang†(an event which we do not endorse). Therefore, God must have created the Universe. One of the popular rebuttals by the atheistic community is that quantum mechanics could have created the Universe. In 1905, Albert Einstein proposed the idea of mass-energy equivalence, resulting in the famous equation, E = mc^2. We now know that matter can be converted to energy, and energy to matter. However, energy and mass are conserved, in keeping with the First Law. In the words of the famous evolutionary astronomer, Robert Jastrow, “The principle of the conservation of matter and energy…states that matter and energy can be neither created nor destroyed. Matter can be converted into energy, but the total amount of all matter and energy in the Universe must remain unchanged forever†(1977). The idea of matter-energy conversion led one physicist to postulate, in essence, that the cosmic egg that exploded billions of years ago in the alleged “big bangâ€â€”commencing the “creation†of the Universe—could have come into existence as an energy-to-matter conversion. If this is true then one just asks: “Where did the energy come from?†Energy could not have popped into existence without violating the First Law of Thermodynamics. So in reality, when scientists argue that quantum mechanics creates something from nothing, they do not really mean “nothing.†The problem of how everything got here is still present. The matter generated in quantum theory is from a vacuum that is not void. Stephen Hawking can help put some light on this... From this you can easily see how something can come from "Nothing". To give a quick example, you can get the Number "0" If you add "1" and "-1". So what could this negative energy be. There are many ideas with what it could be. Some say it is anti-matter, some say it is the theroetical dark energy, while others say that the Big Bang produced a universe that travels in the oppisopte directing in the fourth Dimension, as neh297 showed us in his blog "Is There a Parallel Universe Moving Backwards in Time?". Stay tuned for Part 2 as we give this idea of something from nothing some meat on its bones with mathmatics.
  20. "This Sentence happily existed in all possible states before you observed it. Now it has collapsed into a single state. I hope you are satisfied." The Idea behind this one is a little (HUGE) part of quantum physics called Quantum entanglement, where pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently—instead, a quantum state may be given for the system as a whole. If you observe one particle, the other one will become the exact same. Now yes I know some other Fiz-x students have done blogs about this, but I found an amazing experiment that helps to show this entanglement. Conventional imaging devices like cameras and x-ray machines create pictures by detecting photons that interact with the things being imaged. Now researchers have developed a new quantum imaging technique that shines a beam of photons on an object but then, instead of using these photons to form a picture, uses instead a completely different beam that has never come near the object. If this sounds a bit spooky, it is: what connects the two sets of photons and allows this technique to work is the bizarre quantum physics phenomenon known as entanglement. The advantage of a quantum entanglement camera like this is that you can illuminate an object using photons with a certain wavelength and then use entangled photons with a different wavelength to form the image. In the experiment, there are two paths down which a photon can travel. Each contains a crystal that turns the particle into a pair of entangled photons. But only one path contains the object to be imaged. According to the laws of quantum physics, if no one detects which path a photon took, the particle effectively has taken both routes, and a photon pair is created in each path at once. In the first path, one photon in the pair passes through the object to be imaged, and the other does not. The photon that passed through the object is then recombined with its other ‘possible self’ — which traveled down the second path and not through the object — and is thrown away. The remaining photon from the second path is also reunited with itself from the first path and directed towards a camera, where it is used to build the image, despite having never interacted with the object. A cardboard cut-out of a cat imaged by photons that never went through the cut-out itself. The researchers imaged a cut-out of a cat, a few millimeters wide, as well as other shapes etched into silicon. The team probed the cat cut-out using a wavelength of light which they knew could not be detected by their camera. The cat was picked in honor of a thought experiment, proposed in 1935 by the Austrian physicist Erwin Schrödinger, in which a hypothetical cat in a box is both alive and dead, as long as no one knows whether or not a poison in the box has been released. In a similar way, in the latest experiment, as long as there is nothing to say which path the photon took, one of the photons in the pair that is subsequently created has both gone and not gone through the object. So as you see, Quantum Entanglement is cool, and we are actually able to see it with our own eyes, aided with special instruments of course. So in conclusion, if you ever see a cat, put it in a box, it makes for a fun experiment.
  21. Recently I have gotten my hands on a spectacular game called "Elite Dangerous" by Frontier Developments. It is a realistic space adventure, trading, and combat simulator that is the fourth installment in the Elite video game series. Frontier Developments designed the game to include as much real world physics as possible. Piloting a spaceship, the player explores a realistic 1:1 scale open world galaxy based on the real Milky Way. That means that the player is able to explore the game's galaxy of some 400 billion star systems. With a galaxy this big, there seems to be a problem though, how would one travel from one star system to another let alone to the 400 billion others without breaking one of the most important theories ever conceived. Einstein's special Theory of Relativity, which states that a particle (that has rest mass) with subluminal velocity needs infinite energy to accelerate to the speed of light. The game uses what is called a Frameshift Drive which allows one to travel in a solar system up to 2000 times the speed of light, and make long distance jumps to a star 10 light years away in only a few seconds. Even though a device like this does not exist in real life, it actually follows a very strong proposed theory that would allow one to travel lights years in seconds with out actually traveling faster than the speed of light. Confusing, I know. In 1994, a Mexican physicist, Miguel Alcubierre, theorized that faster-than-light speeds were possible in a way that did not contradict Einstein by harnessing the expansion and contraction of space itself. Under Dr. Alcubierres hypothesis, a ship still couldn't exceed light speed in a local region of space. But a theoretical propulsion system he sketched out manipulated space-time by generating a so-called warp bubble that would expand space on one side of a spacecraft and contract it on another. An Alcubierre Warp Drive stretches spacetime in a wave causing the fabric of space ahead of a spacecraft to contract and the space behind it to expand. The ship can ride the wave to accelerate to high speeds and time travel. The Alcubierre drive, also known as the Alcubierre metric or Warp Drive, is a mathematical model of a spacetime exhibiting features reminiscent of the fictional "warp drive" from Star Trek, which can travel "faster than light" In this way, the spaceship will be pushed away from the Earth and pulled towards a distant star by space-time itself. Alcubierres theory, however, depended on large amounts of a little understood or observed type of exotic matter that violates typical physical laws. In general relativity, one often first specifies a plausible distribution of matter and energy, and then finds the geometry of the spacetime associated with it; but it is also possible to run the Einstein field equations in the other direction, first specifying a metric and then finding the energy-momentum tensor associated with it, and this is what Alcubierre did in building his metric. This practice means that the solution can violate various energy conditions and require exotic matter. The need for exotic matter leads to questions about whether it is actually possible to find a way to distribute the matter in an initial spacetime which lacks a "warp bubble" in such a way that the bubble will be created at a later time. Yet another problem is that it would be impossible to generate the bubble without being able to force the exotic matter to move at locally FTL speeds, which would require the existence of tachyons. 
Some methods have been suggested that would avoid the problem of tachyonic motion, but they would probably generate a naked singularity at the front of the bubble. As Dr. Alcubierre himself put it, "The warp drive on this ground alone is impossible. At speeds larger than the speed of light, the front of the warp bubble cannot be reached by any signal from within the ship. This does not just mean we can't turn it off; it is much worse. It means we can't even turn it on in the first place." So even though at the moment it seems highly unlikely for this to actually work, as we learn more, we will eventually find some way to travel to another star, FAST!
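To see the wall that special relativity puts up, here is a quick Python sketch (my own numbers; the 10^6 kg ship mass is just an assumed round figure) of the relativistic kinetic energy E = (γ − 1)mc² as the ship's speed approaches c. The required energy grows without bound, which is exactly what the Alcubierre idea tries to sidestep by moving space instead of the ship.

```python
# Relativistic kinetic energy of a ship as it approaches light speed.
# Illustration only; the 1e6 kg ship mass is an assumed round number.
c = 299_792_458.0          # speed of light, m/s
m = 1.0e6                  # ship mass, kg (assumed)

for frac in (0.5, 0.9, 0.99, 0.999, 0.999999):
    v = frac * c
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    kinetic = (gamma - 1.0) * m * c ** 2
    print(f"v = {frac:>9} c   gamma = {gamma:10.1f}   KE = {kinetic:.2e} J")
# KE grows without bound as v -> c: no finite energy reaches light speed.
```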
  22. For my final blog of the 1st quarter, I decided to include some planetary destruction. One might say it is "Planetary Annihilation". So I present... Planetary Annihilation. Planetary Annihilation is a real-time strategy computer game developed by Uber Entertainment, whose staff includes several industry veterans who worked on Total Annihilation and Supreme Commander. Throughout my childhood I loved strategy games, due to the almost infinite possible outcomes and the idea that one move could be the downfall of your empire. I enjoyed the Supreme Commander series as a kid; the idea that you could literally control a thousand-plus units at one time and obliterate the enemy. When I saw Planetary Annihilation, and who made it, I had to get it. But enough boring childhood memories that I cherish. LET'S START PHYS-X-ING!!!!! One of the coolest concepts in Planetary Annihilation is that not only can you leave the starting planet to go to others, you can also attach massive rockets to planets and force them to crash into other planets. Here, take a look at the trailer, which uses actual game footage... In the trailer, you can see how one can move planets to devastate the solar system. Let's say we wanted to move the Moon 1 meter in only 10 seconds using four massive thrusters. What would the force be? The Moon has a mass of about 7.35 × 10^22 kilograms. Starting from rest, covering 1 meter in 10 seconds requires an acceleration of 0.02 m/s^2, which means a total thrust of about 1.47 × 10^21 Newtons, or roughly 3.7 × 10^20 Newtons from each of the four thrusters. This is insane; imagine what force it would take to move the Moon onto a collision course with the Earth in only three minutes. Even though mostly improbable, this is still a very cool aspect of the game. Even though smashing planets is fun, the real destruction comes from the "Annihilaser", a massive moon-sized laser that can destroy entire planets in a matter of seconds. Sound a bit familiar? To do this, one has to travel to the Metal Planet and build five massive Catalysts, which guide the enormous energy through the planet and focus it to a single point. Let us use the Death Star from Star Wars to help us with this project, because the output power of the Annihilaser is unknown. To obliterate a planet, we first must decide what to destroy. This planet is going to be modeled after Earth, with the exception that it is a solid, uniform planet. It is then possible to use the gravitational binding energy of the target planet to estimate the amount of energy that must be supplied to the Death Star's laser beam in order to destroy it. The energy required to destroy the planet in question is about 2.25 × 10^32 J. However, the destruction of large planets such as Jupiter requires much more energy; we can estimate that at about 2 × 10^36 J. Since the Death Star outputs energy equal to several main-sequence stars, even if the actual composition of Earth is used in the equation, the value yielded is only slightly larger, and the Death Star can still easily afford to output that energy thanks to its tremendous power source. However, as mentioned above, Jupiter's much greater energy demands would put considerable strain on the Death Star. To destroy a planet like Jupiter, it would probably have to divert all remaining power from all essential systems and life support, which is not necessarily possible. If you have not guessed by now, which would be quite sad, this laser was clearly inspired by the Death Star from Star Wars.
If you have not seen Star Wars, shame on you, but here is a video of the Death Star destroying a planet, with an added bonus. If you have seen all six Star Wars movies, you will recognize that the bonus is watching Jar Jar Binks get annihilated by the Death Star, which is a great thing to see.
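Here is a short Python sketch that redoes both back-of-the-envelope numbers from this post: the thrust needed to shove the Moon 1 m in 10 s from rest, and the gravitational binding energy U = 3GM²/(5R) of a uniform-density, Earth-sized planet (the real Earth is denser toward the core, so its actual binding energy comes out a bit higher).

```python
G = 6.674e-11          # gravitational constant, N m^2 / kg^2

# --- Moving the Moon 1 m in 10 s with four thrusters (from rest) ---
m_moon = 7.348e22      # kg
d, t = 1.0, 10.0       # displacement (m) and time (s)
a = 2 * d / t**2       # from d = 1/2 a t^2  ->  0.02 m/s^2
force_total = m_moon * a
print(f"total thrust  = {force_total:.2e} N")        # ~1.5e21 N
print(f"per thruster  = {force_total / 4:.2e} N")    # ~3.7e20 N

# --- Gravitational binding energy of a uniform Earth-sized planet ---
m_earth = 5.972e24     # kg
r_earth = 6.371e6      # m
u_bind = 3 * G * m_earth**2 / (5 * r_earth)
print(f"binding energy = {u_bind:.2e} J")            # ~2.2e32 J
```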
  23. While being indecisive about which game I should make a blog about, I noticed that all three had scoped rifles in them. So that's when I decided to make my blog about how well the three games implement bullet ballistics. So I present to you... G.O. Call of Elite Sni-per 3!!!! A mash-up of Call of Duty, Counter-Strike: Global Offensive, and Sniper Elite 3. First, let us start with Sniper Elite 3. Sniper Elite 3 is set in June 1942, where, during the Battle of Gazala, OSS sniper Karl Fairburne is sent to assassinate General Franz Vahlen and uncover his top-secret project. To me this game is very addicting, mostly due to its grueling yet astounding bullet-time camera, which kicks in when you fire from a long distance and hit the target in an interesting spot, such as the head, or, as most players try to aim for, the testicles. Observe this footage: The bullet shown was fired from a Mosin-Nagant, a rifle with an average muzzle velocity of about 800 m/s. From a range of 657 meters, assuming no air resistance, we can use the equation x = v0·t + ½at² to calculate that the bullet covered the 657 meters in the x-direction in 0.82 seconds. From this we can find that the bullet dropped 3.308 meters in the y-direction. However, this is calculated without drag forces, so to find the bullet drop with air resistance we need the drag coefficient, which, through the use of the handy-dandy electronic web of information, we learn comes from the ballistic coefficient equation BC = M/(Cd·A), where BC is the ballistic coefficient, M is the mass, Cd is the drag coefficient, and A is the cross-sectional area. The 7.62×54mmR bullet fired by a Mosin-Nagant has a cross-sectional area of about 4.6 × 10^-5 m² and a mass of about 0.012 kg. For this bullet, the ballistic coefficient works out to 1.055 units over the 657 meters. Using all this information, we calculate the drag coefficient to be 0.00137 units. From this, we can calculate the force of air resistance to be 1.1 Newtons using F_drag = C·v². This lets us find the new time it takes for the bullet to travel through the air, 1.09 seconds, which in turn gives our new displacement in the y-direction: 5.843 meters, very different from our calculation without retarding forces. Now let us look at the bullet physics in Call of Duty: Modern Warfare 2. If you have played this game, you will know that this sentence makes no sense, because there is no bullet drop. HECK, there are no ballistics in this game at all. The bullets leave your gun and travel in a straight line. Because of this flaw, people have created a move called the 360° no-scope: you spin your character in a complete circle and blindly fire, hoping against sheer improbability to kill someone, as shown here. However, if you want to learn about the physics of the 360° no-scope, you can turn to this video of someone accurately describing the physics behind the infamous 360° no-scope in a game called Counter-Strike: Global Offensive. Thank you very much for reading, and as always, watch out for impossible reality-phasing projectiles.
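For anyone who wants to play with the drop numbers themselves, here is a rough Python sketch of the same shot: the no-drag time and drop at 657 m, plus a simple numerical integration with a quadratic drag force, using assumed values (800 m/s muzzle velocity, 12 g bullet, 7.62 mm diameter, drag coefficient around 0.3, sea-level air).

```python
# Rough bullet-drop sketch for a Mosin-Nagant-style shot at 657 m.
# Assumed values: 800 m/s muzzle velocity, 12 g bullet, 7.62 mm diameter,
# drag coefficient Cd ~ 0.3 and sea-level air density 1.225 kg/m^3.
import math

g = 9.81
v0 = 800.0                 # m/s, fired horizontally
mass = 0.012               # kg
diameter = 7.62e-3         # m
area = math.pi * (diameter / 2) ** 2
cd, rho = 0.3, 1.225       # assumed drag coefficient and air density
target = 657.0             # m

# No drag: time of flight and drop from x = v0*t, y = 1/2*g*t^2
t_flat = target / v0
print(f"no drag:   t = {t_flat:.2f} s   drop = {0.5 * g * t_flat**2:.2f} m")

# With quadratic drag F = 1/2 * rho * Cd * A * v^2, integrated numerically
x = y = 0.0
vx, vy = v0, 0.0
dt = 1e-4
t = 0.0
while x < target:
    v = math.hypot(vx, vy)
    drag = 0.5 * rho * cd * area * v * v / mass   # drag deceleration, m/s^2
    vx += -drag * (vx / v) * dt
    vy += (-g - drag * (vy / v)) * dt
    x += vx * dt
    y += vy * dt
    t += dt
print(f"with drag: t = {t:.2f} s   drop = {-y:.2f} m")
```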
  24. If you have seen part 2, you will know that this time we are going to be looking at sledgehammers. For my project, I needed a sledgehammer to pound fourteen 1-meter pieces of re-bar into the ground. The reason I am using re-bar in this project is to keep the walk path in the same position while the ground shifts over time. The re-bar traveled an average of 0.05 m per swing, therefore requiring about 20 swings per bar, with each swing delivering an average force of about 100 N over an interval of 0.02 seconds. From this information, we can find the power of each strike and the total work for all 14 bars. Power is equal to the change in work over the change in time. Using the force and the displacement, the work done per swing is 5 joules; putting that into our equation for power gives 250 watts per swing. With 20 swings per rod and 14 rods, that is 280 swings and about 1400 joules of total work, each swing delivered in a short 250-watt burst. If I used a heavier sledgehammer, I would be able to exert more force over the same time interval, driving the re-bar farther into the ground with each swing. However, this would come at the price of more effort to lift the hammer, tiring me out faster. Here are pictures of my fellow scouts helping hammer in the re-bar.
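Here is a quick Python sketch of that work-and-power bookkeeping, using the same estimates as above (100 N average force, 0.05 m of drive per swing, 0.02 s per impact, 20 swings per rod, 14 rods).

```python
# Work and power estimate for driving re-bar with a sledgehammer,
# using the rough numbers from the post.
force = 100.0            # N, average force during impact
depth_per_swing = 0.05   # m, re-bar driven per swing
impact_time = 0.02       # s, duration of each impact
swings_per_rod = 20
rods = 14

work_per_swing = force * depth_per_swing             # 5 J
power_per_swing = work_per_swing / impact_time       # 250 W during the hit
total_work = work_per_swing * swings_per_rod * rods  # 1400 J overall

print(f"work per swing : {work_per_swing:.0f} J")
print(f"power per swing: {power_per_swing:.0f} W")
print(f"total work     : {total_work:.0f} J over {swings_per_rod * rods} swings")
```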
  25. If you have seen part 1, then you may know where I will be getting some of my information from. This time we will be talking about the use of wheelbarrows. The wheelbarrow is designed to distribute the weight of its load between the wheel and the operator, enabling the convenient carriage of heavier and bulkier loads than would be possible if the weight were carried entirely by the operator. Because of this design we can use the equation Torque = F·L. The wheelbarrow can carry 6 planks at 6.8 kg each, a total of 40.8 kg. That load is applied 0.3 meters from the fulcrum (the wheel's axle), while the force exerted by the user is applied 0.75 m from the fulcrum. That means the 40.8 kg, which exerts a force of 400.25 N, only requires 160.1 N to lift. If we want to find the work done with this force and Part 1's distances, we find that in the x-plane we use about the same force, so we exert the same 60,000 joules, but in the y-plane we only exert 16,010 joules, a significant difference from Part 1, and this lets us carry twice as many boards at a time. Because of this difference, I mainly used wheelbarrows to carry the 90 planks down the trail. Next time we will look at sledgehammers.
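And here is a tiny Python sketch of that torque balance, using the plank masses from the post and the 0.3 m and 0.75 m lever arms measured from the wheel's axle.

```python
# Torque balance for a loaded wheelbarrow: load torque = lift torque.
g = 9.81
plank_mass = 6.8       # kg per plank
planks = 6
load_arm = 0.30        # m, load's distance from the wheel axle
handle_arm = 0.75      # m, hands' distance from the wheel axle

load_force = planks * plank_mass * g             # ~400 N straight down
lift_force = load_force * load_arm / handle_arm  # ~160 N at the handles

print(f"load force at the tray   : {load_force:.1f} N")
print(f"lift force at the handles: {lift_force:.1f} N")
```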