What are Quantum Computers?

Added: 03/30/2010

Outwardly, quantum computers are not that different from normal computers, but they differ in that quantum theory is the basis on which they operate. The end result is that they are put together in a completely different way.

A normal computer operates on the basis of units known as bits. Each bit in a normal computer can only be 0 or 1 and nothing else. No matter how many bits you have, the computer at any single point in time can only occupy one combination of those bits in order for the programming to actually work.

A quantum computer is different from this because of a principle in quantum mechanics known as superposition. If you think back to your high school science courses, you may have learned about superposition when looking at how waves like light and sound waves move from one point to another. Quanta can also be in superposition with respect to each other, and the end result is that the quantum bits that make up the computer can actually be 0, 1, or any superposition of the two.

The more quantum bits (also known as qubits) you have, the more possibilities there are. Because you are dealing with superposition, the different states can also be occupied simultaneously. Whereas a simple 8-bit classical computer can only occupy one of the 256 states generated by those 8 bits at any moment, an 8-qubit quantum computer can occupy all 256 states at once.
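
As a rough illustration of that counting (a minimal sketch using plain numpy rather than a real quantum computing library), an n-qubit register is described by a vector of 2^n complex amplitudes; the snippet below builds the uniform superposition over all 256 basis states of an 8-qubit register.

```python
import numpy as np

n_qubits = 8
n_states = 2 ** n_qubits                     # 256 basis states for 8 qubits

# A classical 8-bit register holds exactly one of these 256 values.
# A quantum register is described by 256 complex amplitudes at once;
# here every basis state gets equal amplitude (a uniform superposition).
amplitudes = np.full(n_states, 1 / np.sqrt(n_states), dtype=complex)

probabilities = np.abs(amplitudes) ** 2
print(n_states, "basis states, total probability =", probabilities.sum())
```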

The end result is that quantum computers can be much more efficient than their conventional counterparts. Although quantum computers are still in their infancy, as the technology improves these computers will eventually be able to calculate faster than the computers we have today. When that happens, the 3.0 GHz speed of a personal computer that we brag about now will be nothing in comparison to the quantum computer models that become available on the market.

Physics, Math and Mental Gymnastics Author: David McMahon

Added: 06/21/2006

Mental Gymnastics

While we enter physics to study the fascinating world of black holes, quarks and the quantum, the brutal truth is that mathematics is the central tool of the physicist. Gauss called mathematics the «Queen of the Sciences», and with good reason. If you don’t have a solid grasp of mathematics, you aren’t going to get very far.

One thing I noticed when getting my degrees in physics was that many of the students found math to be a painful «aside». In one case that really stands out in my memory, I was in a mechanics course and one of the homework assignments required the calculation of a brutal integral. I worked very hard by myself over the weekend and managed to get the calculation out with a couple of pages of work. When I returned to class, I was surprised to find that the vast majority of the students had not even attempted to work out the integral. One student had obtained the answer-so he thought-using Mathematica. I looked at it carefully and saw that he had gotten the wrong answer. He argued with me-asserting that the computer cannot make a mistake-but we brought the TA over and it turned out he had entered the integral incorrectly. I had obtained the right answer by working it out by hand.

The student in question had thought he was interested in physics but didn’t want to bother with the work of physics-which involves diving into the mathematics. But to become a good physicist-or a solid engineer-you need to bite the bullet and become a master of mathematics. It doesn’t matter if you’re going to be an astronomer, experimentalist, or engineer-in my view if you want to be the best at what you do in these fields, you should have a solid command of math. So if you are interested in physics but aren’t a mathematical hot shot, how can you pull yourself to the top of the field? In my view, the answer is to view mathematics the way you would athletics. A friend of mine who shared this view coined the term «mental gymnastics» to characterize his outlook and study habits.

We All Aren’t Math Geniuses

While for some students thinking mathematically comes naturally, most of us aren’t ready to master the intricacies of studying proofs when we’re college freshmen. This article is written for those of us who aren’t automatic math whiz kids. If you are a mere mortal who finds math a bit of work, don’t be discouraged. It’s my belief that average people can raise themselves up to become very good mathematicians with a little bit of hard work. What we need is some training: we need to train our minds to think mathematically. The best way to think about how to get this done is to draw an analogy between math and athletics.

To master a sport you have to build your muscles and train your body to react in certain ways. For example, if you want to become a great basketball player, you could be lucky enough to be born Michael Jordan. But more likely, you’ll have to work at building a basic skill set, and the truth is even players like Michael Jordan put extra work into their craft. Some activities you might consider that could make you a better basketball player are:

Lifting weights to build muscle mass

Running sprints to improve your ability to run up and down the court without getting tired

Spending a large amount of time shooting free throws, doing layups and practicing basic skills like passing

It turns out that becoming a successful physicist or engineer is in many ways similar to athletics. OK, so suppose you want to study Hawking radiation and string theory, but you are not a hot shot mathematician and weren’t the best student. Instead of just reading a bunch of books or lamenting the fact that we aren’t Einsteinian geniuses, what are the mathematical equivalents of lifting weights or running sprints that we can do to improve our mathematical ability? In my view, we can begin by following two steps:

Learn the basic rules first-and don’t focus on trying to learn proofs or do the hardest problems.

Repeat, repeat, repeat. Do similar types of problems over and over until they are second nature. Only after a topic becomes second nature calculationally do we consider reading the proofs or theorems in detail.

That is, do tons of problems. In my view a student should start off simple. Don’t try to understand the proofs. For example, in my recent book «Calculus In Focus», I take the perspective that students need to learn math by following the formula: show, repeat, try it yourself. That is:

Show the student a given rule, like the product rule for derivatives

Focus on mastering calculational skills first. Do this by showing the student how to apply the rule with multiple examples.

Repeat, repeat, repeat. Do a given type of problem multiple times so that it becomes second nature.

Once the «how» to solve problems is second nature, then go back for a deeper look at the material. Then learn the «why» and start learning the formality of mathematics through proofs and theorems. I use this approach to drill the central ideas of calculus in my book Calculus in Focus. More information can be found on the book’s website.
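
As a concrete instance of the «show» step applied to the product rule mentioned above, here is a minimal sketch assuming the sympy library is available; the function choices (x^2 and sin x) are just illustrative.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2
g = sp.sin(x)

# Product rule: (f*g)' = f'*g + f*g'
lhs = sp.diff(f * g, x)                  # differentiate the product directly
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)

print(lhs)                               # the expanded derivative
print(sp.simplify(lhs - rhs))            # 0: both sides agree
```

Repeating this with a dozen different f and g is exactly the «repeat, repeat, repeat» step: the rule becomes automatic long before you ever read its proof.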

Key Topics

In addition to the basic approach, a certain baseline has to be established if you want to build yourself up for a formal career in math, physics, or engineering. Let’s build a fundamental skill set that will strengthen your basic math skills and help you master any subject. A few key areas I think students should focus on are outlined below.

The Importance of Algebra

If you study physics or engineering, algebra never goes away. So the first step on the road to becoming the next Stephen Hawking is to master this tedious yet fundamental subject. Do yourself a favor and pick up a decent algebra book and work through it. Do every problem so that by the end of the book, factoring equations, logarithms and other math basics are second nature for you. In the same way that lifting weights is going to make a football or basketball player a better athlete when the games are actually played, mastering algebra will pay off later when you’re doing your homework in dynamics or quantum theory.

Trigonometry

If you go on to become an electrical engineer and study circuit analysis, or decide to master black hole physics, one fundamental area of business you’ll have in common with your colleagues is trigonometry. Make sure you know your trig inside and out, learn what the trig functions really mean, and master those pesky identities. Also don’t overlook this one crucial fact: trigonometry also provides a simple arena where you can learn how to prove and/or derive results.

We all know that later, when you take advanced physics courses, you’re going to see the words «show that» pop up frequently in your homework problems. This is sure to cause headaches among the mere mortals amongst us, but it turns out you can improve your skills in this area in a non-threatening way by deriving trig identities. Instead of viewing the derivation of trig identities as a tedious obstacle, start to look at it as an opportunity. All trig books have homework problems where you have to derive an identity, so pick up a trig book and do it until you’re blue in the face. Take it seriously and write up each proof as if you were submitting a short paper to a major journal. This will teach you how to go from point A to point B mathematically and how to write up a derivation in a formal way that will allow someone else to understand what’s going on. If you do, later it will be easier to get through homework in advanced classes, you’ll get better grades, and you’ll develop a good foundation for writing up theoretical derivations for research papers.

Graphing Functions

While any function can be graphed easily on the computer or on a graphing calculator, it is very important to be able to graph a function on the fly with nothing more than a pencil and paper. The key abilities you want to focus on are developing an intuitive sense for how functions behave and learning how functions behave in various limits. That is, how does a function look when the argument is small? How does it behave as the argument goes to infinity? Dig out your calculus book and review techniques that use the first and second derivative to graph a function. I review these extensively in my recent book «Calculus in Focus».
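
For example, here is a minimal sketch (again assuming sympy; the function x·e^(-x) is just an example) of how the first and second derivatives and the limits supply everything you need for a pencil-and-paper graph.

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(-x)                    # an example function to sketch by hand

f1 = sp.diff(f, x)                    # first derivative: slope
f2 = sp.diff(f, x, 2)                 # second derivative: concavity

print(sp.solve(sp.Eq(f1, 0), x))      # critical point at x = 1 (a maximum)
print(sp.solve(sp.Eq(f2, 0), x))      # inflection point at x = 2
print(sp.limit(f, x, sp.oo))          # 0: the function dies off for large arguments
print(sp.limit(f, x, 0))              # 0: behavior for a small argument
```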

Series and Complex Numbers

In my opinion, the importance of understanding the series expansion of functions and the behavior of complex numbers can’t be overstated. If you want to understand physics, you need to master the use of series. Start by learning how to expand a function in a series. Some series should be second nature («oh yeah, that’s cosine»). Learn about convergence. Get a copy of Arfken and review the solution of differential equations using series. Try to get an intuitive feel for cutting a series off at a given term while retaining the essential behavior of the function. These are tools that are important when studying theoretical physics or advanced engineering.
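
As an illustration of cutting a series off while keeping the essential behavior, here is a minimal sketch (standard library only; the evaluation point x = 0.5 is arbitrary) of the truncated Taylor series for cosine.

```python
import math

def cos_series(x, n_terms):
    """Truncated Taylor series of cos(x): sum of (-1)^k x^(2k) / (2k)!"""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

x = 0.5
for n in (1, 2, 3, 4):
    print(n, "terms:", cos_series(x, n))
print("math.cos:", math.cos(x))   # the truncated series converges on this quickly
```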

David McMahon is a physicist who consults at Sandia National Laboratories and is the author of Calculus In Focus.

Does spooky action at a distance allow faster than light communication? Author: David McMahon

Added: 03/21/2006

Does spooky action at a distance allow faster than light communication

It is often said that scientists do their best work while young. With Albert Einstein this certainly seems to have been the case. Before the age of 40 he developed special relativity, laid the groundwork for quantum theory by explaining the photoelectric effect and in his greatest achievement, developed his elegant theory of gravity, general relativity. However, it was a paper he wrote with two colleagues in 1935-when Einstein was nearly 56 years old-which stands out as his most cited scientific paper. In fact, it may well turn out to be one of the most significant scientific papers of all time.

This is of course the «EPR» paper, written with his colleagues Boris Podolsky and Nathan Rosen. Following a decade of vehement arguments with the great Niels Bohr about the meaning of quantum theory, this paper stands out as Einstein’s «parting shot» in the debate-his last ditch effort to prove that quantum mechanics could not be a fundamental theory. The paper-titled «Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?»-uses quantum mechanics to demonstrate that particles which interact in some way become entangled, in a loose sense meaning that their properties become correlated. As we’ll see in a moment, this is not an ordinary correlation in any sense of the word. It implies that there exists a strange connection between the particles that persists even when they are separated by great distances. In some sense, this connection is instantaneous, putting it in direct conflict with the special theory of relativity. It was this strange connection that led Einstein to the phrase «spooky action at a distance».

Quantum Entanglement

The EPR paper is based on the following thought experiment. Two particles interact and then separate. Furthermore, we imagine that they separate such that they are a great distance apart by the time measurements on the particles can be made. EPR focused on two properties in particular-the position and momentum of each particle. These properties or variables were chosen because of the Heisenberg uncertainty principle. The uncertainty principle tells us that the position and momentum of a particle are complementary, meaning that the more you know about one variable, the less you know about the other. If you have complete knowledge of a particle’s position, then the particle’s momentum is completely uncertain. Or if instead you have complete knowledge of the particle’s momentum, then its position becomes completely uncertain. Intermediate ranges of accuracy are possible, but the lesson to take home is that you cannot measure one variable without introducing some uncertainty into the value of the corresponding complementary variable. The amount of uncertainty is quantified precisely by the uncertainty principle. The uncertainty of quantum mechanics never sat well with Einstein; he felt the theory is statistical in nature only because there exist some unknown or «hidden» variables in the microscopic world that we are not yet aware of.
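
As a rough numerical illustration of the uncertainty principle (a sketch; the 1-nanometer confinement is just an example value), the snippet below computes the minimum momentum spread implied by Δx·Δp ≥ ħ/2.

```python
hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e = 9.1093837015e-31      # electron mass, kg

delta_x = 1e-9              # position known to about 1 nanometer
delta_p_min = hbar / (2 * delta_x)       # minimum momentum uncertainty, kg*m/s
delta_v_min = delta_p_min / m_e          # corresponding velocity spread for an electron

print(f"minimum momentum uncertainty: {delta_p_min:.3e} kg*m/s")
print(f"velocity spread for an electron: {delta_v_min:.3e} m/s")
```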

We now imagine that two particles interact and then move off in different directions. Because they have interacted, they become entangled. When two particles are entangled, the state of each particle alone has no real meaning-the state of the system can only be described in terms of the whole. In terms of elementary quantum mechanics, there is a wavefunction which describes the two particles together as a single unit. The wavefunction, being a superposition of different possibilities, exists in a ghostly combination of possible states. The Copenhagen interpretation tells us that the properties of the particle, position or momentum, don’t exist in definite values until a measurement is made.

When a measurement is made, and we can choose to make a measurement on one particle or the other, the wavefunction «collapses» and each particle is found to be in a definite state. The measurement results obtained for entangled particles are correlated. So if we make a measurement on particle A and find its momentum to be a certain value, we know-without making a measurement on particle B-what its momentum is with absolute certainty. As EPR put it, by making a measurement of momentum on particle A and using momentum conservation, we find that pA + pB is an element of physical reality. In other words the wavefunction has collapsed and the variables have definite values-the ghostly superposition of possibilities is gone. The crucial point is that even though no measurement has been made on the distant particle B, the observer at the location of particle A has learned the value of B’s momentum. Somehow the wavefunction has collapsed instantaneously across a spatial distance-presumably in violation of the speed of light limit set by relativity.

The situation can be made even more interesting by noting that we can choose instead to measure the position of particle A. Again, using conservation principles, we will learn the value of the position of particle B, and the quantity qA - qB assumes physical reality.

Notice that the observer at position A can choose, by deciding which measurement to make, which properties of particle B assume definite values-or assume physical reality in the terminology of EPR. They can make this choice at a later time without any prior agreement with an observer in possession of particle B. This is another aspect of spooky action at a distance. The observer at A makes a measurement choice-presumably chosen using the free will of the mind-and forces particle B into a definite value instantaneously.

The interpretation of these results is still under debate; some believe that the wavefunction only represents our state of knowledge about the system. However, it seems it would be difficult for anyone who believes this to examine diffraction images from electron scattering and deny that the wavefunction is a real physical entity.

In summary, it appears that the position or momentum of each member of the EPR pair is determined by measurements performed on the other, distant member of the EPR pair. The effect seems to be instantaneous, leading Einstein and his colleagues to refer to the phenomenon as «spooky action at a distance». The effect is non-local and appears to be instantaneous, but can anything useful come out of it? Can we exploit this to communicate faster than the speed of light? It turns out that as things are currently understood, the answer is no.

Teleportation

In recent years, it was shown that quantum entanglement could be exploited to transmit the state of a quantum particle from one place to another without having that state propagate through the space that separates the two locations. This certainly sounds magical enough-perhaps like something out of Star Trek-and is the reason that the investigators who discovered this phenomenon denoted it by the term teleportation. As we’ll see in a moment, teleportation demonstrates that despite the spooky action at a distance, special relativity is saved because the ability to communicate is limited in an unexpected way. A fundamental observation is that this is true even though teleportation is described using non-relativistic quantum mechanics-a theory in which, as long as no electromagnetic fields are involved, there is no ultimate speed limit.

We imagine two parties who wish to communicate with each other. In the quantum computing literature they are identified by the overused corny labels of Alice and Bob. It works like this. First, Alice and Bob meet. They create an entangled EPR pair. Then each party takes one member of the pair. Alice stays home, while Bob travels off somewhere, perhaps to Las Vegas.

In teleportation, the quantum particles used can have one of two states, so measurement results can be labeled by a 0 or a 1.

Since Alice and Bob each have in their possession one member of an entangled EPR pair, a spooky action at a distance connection exists between them. Alice can exploit this connection to send Bob the state of a quantum particle. The process is quite simple and Alice just follows these steps.

First, Alice takes the particle she wants to send to Bob and allows it to interact with her member of the EPR pair. Then she makes measurements on her member of the EPR pair and the particle that she wants to send to Bob. Since she is making measurements on two particles, her possible measurement results are the two-bit combinations 00, 01, 10, and 11.

Since Alice has allowed her half of the EPR pair to interact with another particle, the state of Bob’s half of the EPR pair must have changed. It’s at this point that special relativity pokes its head in-through the back door. Although the state of Bob’s particle has changed, any measurements he makes on his half of the EPR pair would give completely random results. Bob has no information in his possession about the state of the unknown particle Alice wants to send him. Spooky action at a distance has occurred, but at this point it’s completely useless. To get something out of the situation, Alice has to call Bob-on an ordinary telephone, say-and tell him her measurement results. If Alice gets the measurement result 00, Bob doesn’t have to do anything-he now has the state of the particle Alice wanted to send him in his possession. However, that only happens 25% of the time, since Alice can get measurement results 00, 01, 10, and 11. If Alice gets measurement results 01, 10, or 11, Bob must perform some operations of his own on his half of the EPR pair in order to obtain the state of the particle Alice wants to send. We won’t get into the technical details, but in each case a different set of operations must be performed by Bob. Alice has to communicate which set of operations to use-based on the measurement result she obtained-using a classical communications channel. Therefore the «instantaneous» nature of the interaction cannot be exploited until a classical communications channel is used.
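
The bookkeeping above can be made concrete with a small simulation. This is only a sketch of the standard teleportation circuit, written with plain numpy state vectors (not a quantum computing library); the input state (0.6, 0.8) and the random seed are arbitrary.

```python
import numpy as np

# Qubit ordering: [Alice's unknown qubit, Alice's EPR half, Bob's EPR half].
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT with qubit 0 as control and qubit 1 as target, acting on 3 qubits.
CNOT01 = np.zeros((8, 8))
for i in range(8):
    b = [(i >> (2 - k)) & 1 for k in range(3)]   # bits of the basis state
    if b[0] == 1:
        b[1] ^= 1
    j = (b[0] << 2) | (b[1] << 1) | b[2]
    CNOT01[j, i] = 1.0

def teleport(psi, rng):
    """Teleport the single-qubit state psi from Alice to Bob."""
    # Alice and Bob share the Bell pair (|00> + |11>)/sqrt(2) on qubits 1 and 2.
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    state = np.kron(psi, bell)                   # full 3-qubit state
    # Alice entangles her unknown qubit with her EPR half and rotates it.
    state = CNOT01 @ state
    state = kron(H, I, I) @ state
    # Alice measures qubits 0 and 1, getting two classical bits (00, 01, 10 or 11).
    probs = np.array([np.sum(np.abs(state[[m * 2, m * 2 + 1]]) ** 2) for m in range(4)])
    m = rng.choice(4, p=probs)
    bob = state[[m * 2, m * 2 + 1]]
    bob = bob / np.linalg.norm(bob)              # Bob's post-measurement qubit
    m0, m1 = m >> 1, m & 1
    # Bob applies corrections only after Alice tells him m over a classical channel.
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return (m0, m1), bob

rng = np.random.default_rng(0)
psi = np.array([0.6, 0.8])                       # the state Alice wants to send
bits, received = teleport(psi, rng)
print("classical bits sent:", bits)
print("state Bob ends up with:", received)       # matches psi up to a global phase
```

Note that the spooky part (Bob's qubit changing) costs nothing, but Bob cannot recover the state until the two ordinary classical bits arrive, which is exactly where the light-speed limit re-enters.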

The interesting thing about teleportation in my view is that it seems to say that special relativity has a major role to play in the transfer of information. In a way this is a fitting cap off to Einstein’s intellectual legacy. Einstein and Bohr both come out winners. Quantum mechanics stands on its own using the standard theory without hidden variables, yet what you can do with it is constrained by Einstein’s special theory of relativity.

An Introduction to Black Hole Thermodynamics Author: Sridhar Narayanan

Added: 10/28/2003

BLACK HOLE THERMODYNAMICS

An Introduction

Black Holes are and have been an anomaly since their prediction. They are not new to the world of Physics; in fact, it was Laplace who first predicted them:

«A luminous star, of the same density as the Earth, and whose diameter should be two hundred and fifty times larger than that of the Sun, would not, in consequence of its attraction, allow any of its rays to arrive at us; it is therefore possible that the largest luminous bodies in the universe may, through this cause, be invisible.» — Pierre Laplace, The System of the World, Book 5, Chapter VI (1798).

The existence of Black Holes was later mathematically formulated as a solution of the GR equations. Since then many Astronomers have been trying to point out Black Holes in our Universe. Theoreticians have been playing with the Mathematics of Black Holes to find elegant solutions to many problems, like the beginning and ending of the Universe, Time Travel, etc. What is so exciting about Black Holes that makes them controversial? In this article, we shall see a brief introduction to the Thermodynamic Laws that govern Black Holes.

How are Black Holes Formed?

A Star exists because the pressure developed inside the star due to Fusion of the Hydrogen in the star balances the Gravitational Attraction of the core of the star. Since these 2 balance each other, the star maintains its size for as long as Hydrogen is present in it-and only as long as Hydrogen is present in it. After all the Hydrogen in the star has been converted to Helium, the pressure inside the star is no longer able to compensate for the Gravitational Attraction and hence the star shrinks. This contraction further increases the temperature inside the star, so much so that the Fusion of Helium begins, and this develops so much pressure that the Pressure exceeds the Gravitational Attraction and the star expands. This stage when the star is huge is called the Red Giant stage. After this stage, when all the Helium is used up, the Star again collapses and now there are 2 possibilities that can happen:

If the mass of the Star < 1.2 times the Mass of the Sun: now the mass content of the star is not sufficient to start another Fusion, and thus the star cools down to a White Dwarf star that emits radiation to wear off its internal Energy.

If the mass of the Star > 1.2 times the Mass of the Sun: now, the Star has enough mass to overcome the White Dwarf stage and thus another fusion starts and it proceeds until…

If the mass of the star is < 3 times and > 1.2 times the Mass of the Sun, then the Star at one point of time explodes due to excess pressure inside the star (a Supernova Explosion) and a Rotating Neutron Star (Pulsar) is left.

If the mass of the star is > 3 times the Mass of the Sun, then there is so much matter that Gravity dominates every inch of the Star and the Star collapses into itself, i.e. it undergoes successive contractions, up to a stage when there is so much concentration of mass in that region of space that a very strong gravitational field is set up-so powerful that not even light can escape from it.

The radius where the Gravitational Potential Energy of the Collapsed Star equals the Kinetic Energy possessed by light, such that light cannot escape beyond this region, is called the Schwarzschild Radius (r_bh).

At the Schwarzschild Radius:

Kinetic energy possessed by light: K = (p·c)/2, using the mass equivalent of light m = p/c, where p is the momentum of a photon.

Gravitational potential energy at r_bh: P = GM(p/c)/r_bh, where M is the Mass of the Star.

Thus, when K = P, we get r_bh = 2GM/c^2.
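
As a quick numerical check of r_bh = 2GM/c^2 (a sketch in Python; the constants are standard SI values and the masses are example choices):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """r_bh = 2GM / c^2"""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun))        # ~2.95e3 m: about 3 km for the Sun
print(schwarzschild_radius(10 * M_sun))   # ~29.5 km for a 10-solar-mass star
```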

Below this radius, light can never escape, and above this radius the Gravitational Field of the Collapsed Star can be experienced. This effect, by which even light cannot escape the gravity of the star, makes it invisible to us. Hence the name Black Hole (coined by John Archibald Wheeler).

We saw above how a Black Hole evolves and the nature of its strong gravitational field. A Black Hole sucks in anything that comes near it! This means that there is a great deal of energy involved with a Black Hole, especially energy in the form of heat. One might be tempted to conclude that the disorderliness of such a chaotic system must be infinite, but, surprisingly, it is not (as shown by Prof. Stephen Hawking). To understand the Thermodynamic Laws governing a Black Hole, we must first consider the following:

a) The realization from Quantum Mechanics that we can think of all matter-energy as waves.

b) The statement of Classical Physics that a wave in a confined region exists as a standing wave.

c) The realization from thermodynamics that the entropy can be viewed as a measure of the number of combinations or permutations of an ensemble that are equivalent. This is equivalent to viewing the entropy more conventionally as a measure of the heat divided by the temperature of a body. According to the Second Law of Thermodynamics, in a closed system the entropy never decreases.

d) The realization from Heisenberg’s Uncertainty Principle that when a sufficient amount of energy transfer takes place in a very short amount of time, then the energy transfer cannot be measured. This permits the violation of Law of Conservation of Energy for a very short amount of time.

e) The Classical Statement that any body above absolute zero (0 K) will radiate energy as Electromagnetic Radiation.

f) Feynman’s Theory of Antimatter as regular matter going backwards in Time.

Virtual Pair Production:

In order to understand the concept of entropy related to a Black Hole, we must first understand the phenomenon of Virtual Pair Production. The uncertainty relation between Time and Energy states that if the energy transfer in a system takes place in a very short time, then the energy transfer cannot be measured; i.e. the energy transfer and the time taken for the energy transfer cannot be simultaneously measured. According to this relation, one can extract energy out of nowhere for a short amount of time. This concept is applied to the phenomenon of Virtual Pair Production, where a particle and an antiparticle appear and disappear again within roughly 10^-35 seconds.

The pair can only exist for about 10^-35 seconds. This is called the Planck Time. We believe virtual pairs of proton-antiprotons, neutron-antineutrons, etc. are continually being formed and disappearing everywhere in the universe. Wheeler therefore characterizes the vacuum at very small distance scales as being quantum foam.
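
A rough estimate of how long a virtual pair can persist follows from the energy-time uncertainty relation, Δt ≈ ħ/(2ΔE), with ΔE taken as the rest energy of the pair. The sketch below does this for an electron-positron pair; the choice of pair and the factor of 2 in the relation are assumptions of the estimate, and heavier pairs live correspondingly shorter times.

```python
hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.998e8                # speed of light, m/s

delta_E = 2 * m_e * c**2             # rest energy "borrowed" to make an e-/e+ pair
delta_t = hbar / (2 * delta_E)       # roughly how long the debt can go unnoticed
print(delta_t)                       # on the order of 1e-22 seconds
```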

THE BLACK HOLE HAS NO HAIR THEOREM

A Black Hole is created by a collapsing neutron star when all the neutrons are crushed out of existence. However, how can mass be crushed out of existence? The Total Mass-Energy of the system remains. In other words, if we define the event horizon to be the sphere at the Schwarzschild Radius, then outside the event horizon, all the properties of the matter that formed it are gone except for the total mass-energy, rotation and electric charge. This is called the Black Hole Has No Hair Theorem. We know that since Space-Time is like a fabric, it is curved by the existence of mass-energy. The total mass-energy of the Black Hole is manifested as the curvature of Space-Time around the singularity. The singularity is that point in a Black Hole where R → 0, R being the radius. It is that point of a Black Hole that is most chaotic in nature and where the behavior of Space-Time is unpredictable. This matter-energy at the singularity will be treated as a wave trapped in a closed domain, and the Entropy of the Black Hole studied Quantum Mechanically.

STANDING WAVES IN A BLACK HOLE

We know that all matter has a wave aspect, and Quantum Mechanics describes the behavior of these waves. So, we shall think about representing the mass-energy inside the event horizon as waves.

Now, what kinds of waves are possible inside the black hole? The answer is standing waves, waves that «fit» inside the black hole with a node at the event horizon. The possible wave states are very similar to the standing waves on a circular drumhead; they aren’t exactly the same because the waves exist in three dimensions instead of just the two of the drumhead, but they are very close to the same.

Note that I just said «three dimensions.» This is correct; we are using nonrelativistic quantum mechanics. The energy represented by a particular wave state is related to the frequency and amplitude of its oscillation. As we saw for the standing waves on a drumhead, the higher «overtones» have a higher frequency and thus these Quantum Mechanical waves contain more energy. Assume that the total mass-energy inside the event horizon is fixed. So, we have various standing waves, each with a certain amount of energy, and the sum of the energy of all these waves equals the total mass-energy of the black hole.

There are a large number of ways that the total mass-energy can distribute itself among the standing waves. We could have it in only a few high-energy waves or a larger number of low energy waves. It turns out that all the possible standing wave states are equally probable.

Thus, we can calculate the probability of a particular combination of waves containing the total mass-energy of the black hole the same way we calculate the probability of getting various combinations for dice. Just as for the dice, the state with the most total combinations will be the most probable state. But we have seen that the entropy is just a measure of the probability. Thus we can calculate the entropy of a black hole. We also know that the entropy measures the heat divided by the absolute temperature.
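
The dice analogy can be made concrete (a small sketch, standard library only): count how many ways each total can occur, and take entropy as proportional to the logarithm of that count; the total with the most combinations is the most probable state.

```python
import math
from collections import Counter
from itertools import product

# Count the combinations behind each total of two dice.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total, ways in sorted(counts.items()):
    entropy = math.log(ways)                  # entropy ~ log(number of combinations)
    print(f"total {total:2d}: {ways} ways, entropy ~ {entropy:.2f}")
# The total 7 has the most combinations (6 of the 36), so it is the most probable state.
```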

The «heat» here is just the total mass-energy of the black hole, and if we know that and we know the entropy, we can calculate a temperature for the black hole. So, as Hawking realized, we can apply all of Thermodynamics to a black hole. Any body with a temperature above absolute zero will radiate energy. And we have just seen that a black hole has a non-zero temperature. Thus thermodynamics says it will radiate energy and evaporate.

We can calculate the rate of radiation for a given temperature from classical thermodynamics. How is this possible? Nothing can get across the event horizon so how can the black hole radiate? The answer is via virtual pair production.

Consider a virtual electron-positron pair produced just outside the event horizon. Once the pair is created, the intense curvature of space-time of the black hole can put energy into the pair. Thus the pair can become non-virtual; the electron does not fall back into the hole. There are many possible fates for the pair.

Consider one of them: the positron falls into the black hole and the electron escapes. According to Feynman’s view we can describe this as follows: The electron crosses the event horizon traveling backwards in time, scatters, and then radiates away from the black hole traveling forwards in time.

Using the field of physics that calculates virtual pair production etc., called Quantum Electrodynamics, we can calculate the rate at which these electrons etc. will be radiating away from the black hole. The result is the same as the rate of radiation that we calculate using classical thermodynamics. The fact that we can get the radiation rate in two independent ways, from classical Thermodynamics or from Quantum Electrodynamics, strengthens our belief that black holes radiate their energy away and evaporate.

This is how Prof. Stephen Hawking argued that a Black Hole does radiate energy and evaporates at some point in time.

This shows that the Black Hole does not have an infinite value for its entropy; in fact its value is finite and is equal to:

S_bh = C_n × A_h

where S_bh is the Entropy of the Black Hole, C_n is a constant, and A_h is the area of the Event Horizon. This Law was proposed by Jacob Bekenstein and Stephen Hawking and is called the BHAL-Bekenstein-Hawking Area Law.
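
The constant is fixed by the Bekenstein-Hawking formula S = k_B c^3 A / (4 G ħ). As a sketch (standard SI constants; the solar-mass example is just for scale), the snippet below evaluates it for a one-solar-mass black hole.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
hbar = 1.055e-34       # reduced Planck constant, J*s
k_B = 1.381e-23        # Boltzmann constant, J/K
M_sun = 1.989e30       # solar mass, kg

r_s = 2 * G * M_sun / c**2            # Schwarzschild radius
A_h = 4 * math.pi * r_s**2            # area of the event horizon

S_bh = k_B * c**3 * A_h / (4 * G * hbar)   # Bekenstein-Hawking entropy
print(f"horizon area: {A_h:.3e} m^2")
print(f"entropy:      {S_bh:.3e} J/K")     # roughly 1.5e54 J/K for one solar mass
```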

A BONUS

Thermodynamics of the Universe

Consider the universe. It has a size of about 15 billion light years or so. It also has a total amount of mass-energy. If we represent this mass-energy as quantum mechanical standing waves, just as we did for black holes, we can calculate the total entropy of the universe.

It turns out that the entropy of either a black hole or the universe is proportional to its size squared. Thus for a given amount of total mass-energy, the larger the object the higher the entropy. But the universe is expanding, so its size is increasing. Thus the total entropy of the universe is also increasing.

This leads us to the idea that the Second Law of Thermodynamics may be a consequence of the expanding universe. Thus cosmology explains this nineteenth century principle. Put another way, we have realized that the direction of time, «time’s arrow,» can come either from the fact that the universe is expanding or from the Second Law of Thermodynamics.

We have now found a relationship between these two indicators of the direction of time. It is amusing to speculate about what will happen to the Second Law of Thermodynamics if the universe is closed, so that at some point the expansion stops and reverses.

Even more wild is the idea that if the expansion of the universe determines the direction of time’s arrow, then if the universe starts to contract the direction of time will also reverse. However, common sense tells us that this is not possible, for nature is intelligent enough to enforce the constraint upon us that Time will always move in the increasing direction and that whatever may be the case, our age will always keep increasing.

Whatever may be the case, one thing is for sure: Nature will always come out the way she wants to, and so while dealing with her, we should go about without any predetermination of her characteristics...

She may not like it...and may never show up!!!!

Hope you enjoyed reading this article...

Energy Redshift Paradox Author: Erich Schoedl Added: 11/13/2003

This assumes the reader understands time dilation from gravitational redshift, and has a basic understanding of mechanics. First, let me throw a thought experiment at you...

Set Up:

My UFO is extremely powerful, powerful enough to pull or push planets and even stars. One day I found a dead star system with a small non-rotating black hole that has an event horizon radius of 1,000 km. The system has two planets, A and Z. In the outer orbit (r(Z) ~ 1.5 x 10^8 km), planet Z is aging at the same rate as an Earth clock. Planet A is orbiting the black hole very closely (r(A) ~ 2700 km), and clocks on planet A age at only ½ the rate of a clock on planet Z due to time dilation from gravitational redshift predicted by the Schwarzschild solution. I’ll use gamma=2 for this relative redshift throughout this post only for simplicity.

I use my UFO to pull planet Z, accelerating it at 1 m/s^2 with a given thrust. I then fly in with my UFO and pull planet A, also accelerating it at 1 m/s^2 with the same engine thrust I used on Z, so as I see them locally it has almost exactly the same mass as planet Z. So, as measured locally, mass of Z = mass of A (say both are 6 x 10^24 kg). Now, to simplify what is observed, I’m only pulling the planets in a direction perpendicular to the orbit radius (to avoid arguments regarding the radial contraction in the metric). And to avoid relativistic complications from velocity, let’s slow the orbit velocities to a stop just before running each experiment.

At planet A, my saucer and I age more slowly relative to Z (by General Relativity’s gravitational redshift), and the whole acceleration process therefore appears to occur more slowly to observers on Z because of time dilation. No big deal, right? But when I leave a line attached to A, fly out to Z, and then try to pull A, I can only accelerate it at about 0.5 m/s^2 with the same engine thrust I used before. Planet A feels twice as heavy (accelerates half as fast) to my UFO when I’m pulling it from the Z orbit! Why?

The Paradox:

If we break down acceleration, we find it is dv/dt, a change in velocity over a change in time (the differential limit being where the changes become infinitesimally small). And velocity is just dx/dt, a change in distance x over a change in time, so acceleration is dx/dt^2. Now the units are what is important.

Since the change in time at Z is twice the change in time at A because of time dilation, the acceleration at A when viewed from Z appears to be only 1/4 of the local acceleration, assuming the distortion of the Schwarzschild geometry (in time only). To paraphrase: when viewed from planet Z, the UFO near planet A appears to pull it with only a quarter of the acceleration that is measured locally.
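
Just the arithmetic of that scaling argument, as a minimal sketch: it applies nothing more than the dt_Z = gamma·dt_A substitution from the setup (gamma = 2 and the 1 m/s^2 local value are the example numbers used above).

```python
gamma = 2.0          # redshift factor between planet A and planet Z (per the setup)
a_local = 1.0        # acceleration measured locally at planet A, m/s^2

# With only the time coordinate rescaled (dt_Z = gamma * dt_A), the
# acceleration seen from Z picks up a factor of 1/gamma^2.
a_seen_from_Z = a_local / gamma**2
print(a_seen_from_Z)     # 0.25 m/s^2, a quarter of the local value
```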

So what does this thought experiment mean? The paradox comes into play not with measures of force directly, but with energy. If I pull planet A locally with the force 6 x 10^24 (N) for 10 meters, this equals 6 x 10^25 Joules of work done on planet A.

Assuming my really long UFO cable is inextensible with negligible mass, I pull on the cable attached to planet A with the same force until I move 10 meters, but this time from the orbit radius of Z. The work done by my UFO should be the same in both instances: it is equal to the energy burned by the spacecraft fuel. Here’s the problem. The force felt at planet A when the UFO pulls from the Z orbit would be 4 times as strong as the force the UFO exerts at the Z end of the cable.

All of the energy is spent in half the time as measured by clocks on planet A (assuming only the redshift time dilation from the Schwarzschild geometry). The force is 4 times as strong because the acceleration is 4 times as much, while the 10 meter pull distance does not change from one frame to the other (in the tangent coordinate mentioned above, x = x’ in the Schwarzschild metric). This would mean we did 4 times the work on A, where W’ = 4W or W = 1/4W’, just by pulling from the orbit radius of planet Z, while burning the same amount of fuel.

This is a paradox, because we could then devise a mechanical method to do this work on A from Z, go to A to collect this added energy, then return to Z with more energy than we started with! The extra energy isn’t surrendered by the mass of the system, or balanced out somehow, because it’s totally mechanical in nature. It violates the conservation of energy principle — a big no-no.

Solution:

Changing one simple thing can resolve the energy paradox. Since work = force times distance, generally, and force is mass times acceleration, we can put the equation for work done on A locally into the form: W = m (dx / dt^2) x. Now for the work done on A by the UFO acting from Z, we just use the form W’ = m’ (dx’ / dt’^2) x’.

The primes simply mean we are looking at a potentially different value from a different frame of reference (not all of the variables need to be different). Looking at the equation components strictly for the units, we can see that the square of time is in the denominator, and the square of distance is in the numerator along with mass.

Next let’s assume a few things. First, we should assume W = W’, or else there will be a violation of energy conservation as discussed. Next, let’s assume the gravitational time dilation is accurate: dt = 2dt’ in our case. Here dt is the rate of aging of a local clock on Z, and dt’ is the time-dilated rate of a clock on planet A observed at planet Z (gamma = 2), which ticks only half as fast in comparison. In Expanded Relativity, the dimensions of observed length also change (see «Unified Relativity» for more on Expanded Relativity). In the tangent direction in which we’re pulling the planets, the observed length is dilated in conjunction with the time dilation. So to pull planet A one meter, we would need to pull 2 meters at planet Z, or dx = 2dx’. So the relativity factors of space dilation / time dilation cancel and W’ = m’ (dx / dt^2) x. For W’ = W, m’ would simply be equal to m.

The energy is conserved if we just consider that the dimensions of length are dilated in the metric, or stretched out (the radial dimension is complicated and contracted by mapping the coordinate system, so this experiment only considers the tangent dimensions as shown). By separating factors, the acceleration felt at A would be twice the acceleration experienced at Z, but the distance pulled at A would be only half the distance the cable was pulled at Z. If we pull planet A while in the Z orbit radius, and we suppose that the acceleration along the cable is constant, then we might assume the mass of planet A is twice its proper mass from the vantage of Z since we pull with a given thrust, but it only accelerates half as fast. This apparent increase in inertia is what was measured in the UFO thought experiment.

If you’ve read the «Unified Relativity» post, you’ve noticed an assumption was made that m’ = γm because of the appearance of this extra inertia. This isn’t such a crazy mistake, since the accelerations are redshift dependent. But you can see in the above treatment of work that the actual mass stays constant in order to stay consistent with the conservation of energy. The size of a mass appears to change, as does the apparent inertia relative to a reference frame. Even the gravitational curvature caused by the mass is distorted relatively.

But as a mass is lowered toward the large gravity source, the actual rest mass, and the energy contained in the mass, must stay constant to maintain the conservation of energy principle. The potential energy argument made in «Unified Relativity», that mass is relative, is not a valid contention regarding the energy contained in the mass, in light of the arguments raised above (I’ll note the specific correction in that paper).

The observed effects of mass, however, are completely relative as described. This includes a relativistic increase in the Newtonian gravitational attraction at greater distances from the Sun, through the relativity of acceleration described above.

In truth, when I began writing this assertion, I was hoping to fit the relativity of gravitational acceleration from the Sun to the anomalous observations of the velocities of the Pioneer 10 & 11, Galileo, and Ulysses spacecraft (there was an article on this in Discover magazine offering the MOND theory of gravity as an explanation to resolve the issue). The discrepancy involves an apparent slowing down of the velocity beyond that predicted by current theory (see Anderson’s paper, gr-qc/0104064, for more on this measured discrepancy). Even the effects of Expanded Relativity that diverge from the Schwarzschild geometry are, however, orders of magnitude shy of explaining this particular discrepancy. Still, the subtle energy paradox outlined above, though difficult to measure without the UFO, is very important to the foundation of conservation of energy in Relativity.

Employment in Physics

Author: Guest Writer

Added: 02/14/2006

Author: ZapperZ (PF)

Employment in Physics — Part 1

There have been frequent questions on the kinds of employment that are available for physicists. That question is very difficult to answer, because it depends on a number of factors, such as where you are, what degree you obtained, what area of specialization you went into, and what skills you have acquired.

I think it is best to start by simply pointing out the kind of job advertisements that most physicists in the market actually read. As far as I know, these are the two most popular sources of job listings aimed at physicists and others in similar fields such as astronomy, astrophysics, biophysics, chemistry, etc. Keep in mind that these job listings change often, even weekly, and the number of listings also fluctuates during different times of the year. So sample them a few times to get a good idea of the kinds of jobs that are available.

A few of the items in the list are also for «studentships», or schools offering assistantships for students to pursue a Ph.D. degree, sometimes for a specific field of study. So not all of them are only for job-seekers.

Maybe this might influence the area of study you want to go into. Zz.

Employment in Physics — Part 2

This is a continuation of the series on issues related to employment in physics and of physicists.

New statistics on the salary increases of physics Ph.D.’s working in the industrial sector in the US have just been released.

I am bringing this up because I want to make two important points:

That if you have the needed skills and specialities, your employability as a physics Ph.D. extends beyond just the typical academic boundaries, and you CAN be employed in many industrial sectors of the economy. This I have tried to emphasize in my «So You Want To Be A Physicist» essays;

That compared to many other areas of science and engineering, a physics degree holder in the industrial sector still makes a «comfortable», if not lucrative, living.

Zz.

Employment in Physics — Part 3

Once again, we hear «horror stories» based on anecdotal evidence of the difficulties in finding jobs with a physics degree. While this certainly can be true, the employability or desirability of a physics graduate depends HEAVILY on (i) the area of physics that person specialized in, (ii) whether it was theoretical or experimental, (iii) the skills that the person acquired, and (iv) pedigree (i.e. who his/her mentor was).

Because of this, you can have someone (like Jonathan Katz) who sees people going into despair due to not having a good career in physics, versus people like me who see Ph.D.’s in Medical Physics and Condensed Matter Physics being offered jobs upwards of $70,000 in industry even before they graduate! Let’s get this VERY clear: what you choose to do in graduate school has a huge impact on your ability to get a job upon graduation! It can be the difference between having your options narrowed to employment only in academic institutions or research labs, and having the wider option of also being employable in industry.

This topic will certainly be a major part of a future installment of the «So You Want To Be A Physicist» essay. But for now, if you want a good snapshot of employment in physics, at least in the US, go past all of this anecdotal evidence and look at the statistics that have been compiled by the AIP.

Employment in Physics — Part 4

This time I’m making a reference to a recent article on the job market for a specific speciality — MRI Physicists.

I am bringing this up because I want to make two important points:

While this article focuses on a particular field, it also gives a broad feel for the job outlook in medical physics as a whole. In any case, the advice given in this article echoes what I have been trying to get across in this series of essays, and in my «So You Want To Be A Physicist» essay: the ability to adapt to changing situations. To be able to do that, one must have as wide a training and experience as possible to increase one’s chances of having the necessary skill.

Computer Systems Analysts

Nature of the Work

All organizations rely on computer and information technology to conduct business and operate more efficiently. The rapid spread of technology across all industries has generated a need for highly trained workers to help organizations incorporate new technologies. The tasks performed by workers known as computer systems analysts evolve rapidly, reflecting new areas of specialization or changes in technology, as well as the preferences and practices of employers.

Computer systems analysts solve computer problems and apply computer technology to meet the individual needs of an organization. They help an organization to realize the maximum benefit from its investment in equipment, personnel, and business processes. Systems analysts may plan and develop new computer systems or devise ways to apply existing systems’ resources to additional operations. They may design new systems, including both hardware and software, or add a new software application to harness more of the computer’s power. Most systems analysts work with specific types of systems— for example, business, accounting, or financial systems, or scientific and engineering systems—that vary with the kind of organization. Some systems analysts also are known as systems developers or systems architects.

Systems analysts begin an assignment by discussing the systems problem with managers and users to determine its exact nature. Defining the goals of the system and dividing the solutions into individual steps and separate procedures, systems analysts use techniques such as structured analysis, data modeling, information engineering, mathematical model building, sampling, and cost accounting to plan the system. They specify the inputs to be accessed by the system, design the processing steps, and format the output to meet users’ needs. They also may prepare cost-benefit and return-on-investment analyses to help management decide whether implementing the proposed technology will be financially feasible.

When a system is accepted, systems analysts determine what computer hardware and software will be needed to set the system up. They coordinate tests and observe the initial use of the system to ensure that it performs as planned. They prepare specifications, flow charts, and process diagrams for computer programmers to follow; then, they work with programmers to «debug,» or eliminate, errors from the system. Systems analysts who do more in-depth testing of products may be referred to as software quality assurance analysts. In addition to running tests, these individuals diagnose problems, recommend solutions, and determine whether program requirements have been met.

In some organizations, programmer-analysts design and update the software that runs a computer. Because they are responsible for both programming and systems analysis, these workers must be proficient in both areas. (A separate statement on computer programmers appears elsewhere in the Handbook.) As this dual proficiency becomes more commonplace, these analysts are increasingly working with databases, object-oriented programming languages, as well as client server applications development and multimedia and Internet technology.

One obstacle associated with expanding computer use is the need for different computer systems to communicate with each other. Because of the importance of maintaining up-to-date information—accounting records, sales figures, or budget projections, for example—systems analysts work on making the computer systems within an organization, or among organizations, compatible so that information can be shared among them. Many systems analysts are involved with «networking,» connecting all the computers internally—in an individual office, department, or establishment—or externally, because many organizations rely on e-mail or the Internet. A primary goal of networking is to allow users to retrieve data from a mainframe computer or a server and use it on their desktop computer. Systems analysts must design the hardware and software to allow the free exchange of data, custom applications, and the computer power to process it all. For example, analysts are called upon to ensure the compatibility of computing systems between and among businesses to facilitate electronic commerce.

Job Outlook

Employment of computer systems analysts is expected to grow much faster than the average for all occupations through the year 2014 as organizations continue to adopt and integrate increasingly sophisticated technologies. Job increases will be driven by very rapid growth in computer system design and related services, which is projected to be among the fastest growing industries in the U.S. economy. In addition, many job openings will arise annually from the need to replace workers who move into managerial positions or other occupations or who leave the labor force. Job growth will not be as rapid as during the previous decade, however, as the information technology sector begins to mature and as routine work is increasingly outsourced to lower-wage foreign countries.

Workers in the occupation should enjoy favorable job prospects. The demand for networking to facilitate the sharing of information, the expansion of client server environments, and the need for computer specialists to use their knowledge and skills in a problem-solving capacity will be major factors in the rising demand for computer systems analysts. Moreover, falling prices of computer hardware and software should continue to induce more businesses to expand their computerized operations and integrate new technologies into them. In order to maintain a competitive edge and operate more efficiently, firms will keep demanding system analysts who are knowledgeable about the latest technologies and are able to apply them to meet the needs of businesses.

Increasingly, more sophisticated and complex technology is being implemented across all organizations, which should fuel the demand for these computer occupations. There is a growing demand for system analysts to help firms maximize their efficiency with available technology. Expansion of electronic commerce—doing business on the Internet—and the continuing need to build and maintain databases that store critical information on customers, inventory, and projects are fueling demand for database administrators familiar with the latest technology. Also, the increasing importance being placed on «cybersecurity»—the protection of electronic information—will result in a need for workers skilled in information security.

The development of new technologies usually leads to demand for various kinds of workers. The expanding integration of Internet technologies into businesses, for example, has resulted in a growing need for specialists who can develop and support Internet and intranet applications. The growth of electronic commerce means that more establishments use the Internet to conduct their business online. The introduction of the wireless Internet, known as WiFi, creates new systems to be analyzed. The spread of such new technologies translates into a need for information technology professionals who can help organizations use technology to communicate with employees, clients, and consumers. Explosive growth in these areas also is expected to fuel demand for analysts who are knowledgeable about network, data, and communications security.

As technology becomes more sophisticated and complex, employers demand a higher level of skill and expertise from their employees. Individuals with an advanced degree in computer science or computer engineering, or with an MBA with a concentration in information systems, should enjoy favorable employment prospects. College graduates with a bachelor’s degree in computer science, computer engineering, information science, or MIS also should enjoy favorable prospects for employment, particularly if they have supplemented their formal education with practical experience. Because employers continue to seek computer specialists who can combine strong technical skills with good interpersonal and business skills, graduates with non-computer-science degrees, but who have had courses in computer programming, systems analysis, and other information technology subjects, also should continue to find jobs in computer fields. In fact, individuals with the right experience and training can work in computer occupations regardless of their college major or level of formal education.

How to Become a Computer Systems Analyst

Virtually all organizations in the U.S. depend on computer and information technology to perform specific functions and manage data and business aspects. In order to run efficiently, organizations must use technology prudently and integrate new, evolving technologies. Computer systems need updating and customizing on a regular basis. This is where the computer systems analyst comes in.

What does a computer systems analyst do?

Computer systems analyst is a blanket term for a computer professional who solves computer issues and uses technology to meet the needs of the company. These professionals might be employed under different titles: IT consultant, IT specialist, programmer analyst, business systems analyst, systems architect and computer specialist, to name a few. These highly trained professionals plan, design and expand new computer systems as well as configure software and hardware. They update/upgrade current computer systems and modify them for new or expanded functions. They are frequently charged with preparing cost reports for management.

Computer systems analysts usually collaborate with other professionals in the information technology field, such as programmers, network security specialists, and software engineers, and will sometimes specialize in specific systems such as accounting, business, engineering, financial, or scientific systems.

What kind of training does a computer systems analyst need?

Computer systems analysts are typically required to have at least a bachelor’s degree. Many employers may require a graduate-level degree, as well as experience in the field, for more complicated jobs and senior-level positions. Computer systems analysts have many different degrees, but typically they hold degrees in computer science, information technology, or management information systems.

Qualifications vary by employer, but general qualifications include: broad computer systems knowledge, experience in employer’s field, specific computer system knowledge, logical thinking skills, great communication and interpersonal skills, and sound problem-solving and analytical skills. Internships are appropriate for students ready to graduate, as they do not usually require any experience.

What are the prospects for a career in computer systems analysis?

Computer systems analyst jobs are projected to increase much faster than average for all occupations. There are new job opportunities expected in most related career fields. As companies and organizations continue to upgrade their technologies, excellent job prospects for computer systems analysts are expected. Employment for computer systems analysts is projected to increase by 29% from 2006 to 2016 with 146,000 new jobs. Computer systems analysts will be in high demand as companies and organizations continue to implement and incorporate new advanced technology. (1)

How much do computer systems analysts make?

According to the Bureau of Labor Statistics, the median annual salary for computer systems analysts was $75,890 in May 2007. The middle 50% earned between $56,590 and $92,420 annually. The lowest 10% earned less than $43,930 and the highest 10% earned above $113,670. (1) Computer systems design and related services, management companies and enterprises, insurance carriers, and professional and commercial equipment and supplies merchant wholesalers had the largest median yearly salaries.

A career in computer systems analysis is a great choice for you if you enjoy working in a comfortable environment in an office or laboratory and spending long periods of time working on a computer.
