This file was generated from rg.txt on Wed Dec 16 17:42:53 GMT 2009
A: That depends. The largest numbers regularly used in calculations give some measure of the level of civilisation, perhaps. The invention of logarithms in a dank Scottish castle reveals strange pathways in man's progress from the caves. We expect chimpanzees to count bananas and cats to count meal times in the day, but only humans are concerned with the numerical value of the speed of light.
Exercise: Try learning the counting numbers from 0 to 10 in a language which you do not know yet.
Generally you cannot tell how a number is going to be stored in a computer. Privileged numbers such as Pi or a friend's phone number are often remembered by people. Certain other numbers are better not stored but given generator functions. You don't care whether a right angle is ninety degrees or Pi/2 radians. Just keep three, four, five and a triangle.
Euclid described prime numbers, which are numbers with no factors apart from one and themselves, and irrational numbers, which are numbers which cannot be expressed as fractions. The English mathematician Wallis is said to have invented a symbol like a figure eight lying on its side to represent infinity, a number bigger than any ordinary counting number.
Many numbers used to represent reality on computers are the result of sampling where a range of values is split into segments and a name or value is assigned to each segment. Colour displays and images are the most common example of this.
'Fan Dee, Fan Lotteree' (Thai saying)
Numbers games are another aspect of numbers. 'Running Numbers' has been dramatized by Hollywood films and folk-rock ballads. These emphasise a sort of struggle against 'state control' of numbers. Buddhist monks in Thailand risk getting killed if they say too much about the next lottery number. There are cultures where almost everyone seems to bet on numbers. Next week's imaginary number becomes yesterday's real number. Just like an alcohol hangover. For many the lottery number is like the dreams of opium. One could say that Marx got it wrong and in place of the statement:-
Religion is the Opium of the Masses
A: You mean numbers like 'forty two', which occurs in the Hitch Hiker's Guide to the Galaxy series. There are many fundamental equations of physics, starting with Newton's equation for gravitation. All of these have values expressed in metric units, although some organisations in the Anglo-Saxon world try to cling on to imperial units of weights and measures. Physical constants are usually given in 'floating point notation': X = M E EXP or M E -EXP, where 'M' is a decimal number, normally between 1 and 10, and the 'E' stands for a power of 10, so that 27.5 could be written 2.75 E1 and 1/8 can be written 1.25 E-1.
A selection of physical constants is given. The first group is taken from Appendix 2 of 'The Chemical Bond' by Linus Pauling.
Velocity of light c 2.99793 E10 cm/sec
A: Algebra is a way of anticipating problems in calculations by letting symbols stand for unknown numbers. It is said to originate with the Arabs. There are many Arabic words similar to Algebra. These often mean force or coercion. The power form is also a name for God.
'Modern Algebra' was invented in the 1800s, and one of the main advances was the interchange between numbers and functions or operators. The French mathematicians Fourier and Galois pioneered this line of thought. Algebra as taught in schools came from much earlier times. Italy saw the earliest European developments when Cardano and others came up with a method for solving cubic and quartic equations.
Formal algebra came with the attempts to build mathematics around a system of axioms in the style of Euclid. David Hilbert and Bertrand Russell made these attempts fashionable in the early 1900s, but Kurt Godel showed that this approach had limitations during the 1930s. For most computer science applications the formal approach is quite good enough.
In the 1960s and 1970s the English started to teach 'set' theory in schools. This was often done in an ideological vacuum. The kids were not always confronted by such basic questions as race and class in this context. Nor were they necessarily drilled in hard questions on set theory which had previously been in the syllabus: calculations about permutations and combinations. The success of the British National Lottery shows just how far things have fallen.
Sets are like packs of playing cards, boxes of chess pieces, selections of lottery numbers, or even populations loving, living and dying. Early mathematics teaching concentrated on the computational aspect of set theory, nowadays called Combinatorial Analysis. Horse races are an excellent example of sets, as are dictionaries.
The maintenance of knowledge about particular sets is of great economic importance. A goldrush mentality fuels speculation on world stockmarkets as search engine companies go public. The ignorance of next week's Lottery Numbers is also important for profits of the operator. The order of winners in a horse race or the subsets of football matches with draws or high scores provides the paydirt for a global money extraction industry.
A set of numbers can be written in brackets: {1 3 5 7 9} is the set of odd numbers less than ten. A set containing no elements at all is called the empty set and it is written {}. The set consisting of {0 1} is special, because 0 and 1 can be made to correspond to values 'true' and 'false'.
Sets are connected with logic by using the phrase 'a is a member of A', or 'b belongs to class B'. Such statements are always either true or false in classical set theory. Fuzzy logic is a new form of set theory allowing for intermediate values between true and false.
Modern Algebra uses certain conventions such as upper case for sets, and lower case for members. Many statements of logic can be translated to theorems in the algebra of sets. The most common are De Morgan's Laws. A union B, or 'A u B', is defined as the set of elements x which are members of A or B. A ^ B is defined as the set of elements x which are in both A and B. A-B is the set of x where x is in A but not B. The number of elements in a set X may be written as c(X). For any set X and subsets A and B the following identities hold. Here '<=' stands for less than or equal to. If the intersection of two sets is empty then they are said to be disjoint.
c(A^B) <= c(A) <= c(A u B) <= c(X)
If A is a subset of a set X then it is possible to define a post-fix function called '%' with A% = 100.0*c(A)/c(X), where '/' stands for divide. The '%' function is merely the ordinary percentage calculation.
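These set operations and the '%' function are easy to try out. A minimal sketch in Python, using its built-in set type; the sets X, A and B are invented examples, not taken from the text.

    # A sketch of the set operations above using Python's built-in sets.
    X = set(range(1, 11))          # {1 .. 10}
    A = {1, 3, 5, 7, 9}            # odd numbers less than ten
    B = {3, 6, 9}                  # multiples of three

    union = A | B                  # A u B
    intersection = A & B           # A ^ B
    difference = A - B             # A - B

    def c(S):
        # number of elements in a set, written c(S) in the text
        return len(S)

    def percent(S, whole):
        # the post-fix '%' function: 100.0 * c(S) / c(whole)
        return 100.0 * c(S) / c(whole)

    # the chain of inequalities c(A^B) <= c(A) <= c(A u B) <= c(X)
    assert c(intersection) <= c(A) <= c(union) <= c(X)
    print(percent(A, X))           # 50.0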
Exercise: Let P={3 5 7 11 13 17 ...} be the set of odd primes and let E={6 8 10 ...} be the set of even numbers greater than four. Then is E=P+P? Here P+P denotes the set formed by the sums of all pairs p+q with p,q in P. It is already known that E=P+P*P*...*P, where P*P*...*P is some product of prime numbers. It is also known that every sufficiently large odd number is the sum of three odd primes. Vinogradov [1] proved this in 1937. It is also known, by computer search, that any even number n with 4 < n < 400,000,000,000,000 (four hundred trillion in the American style) is the sum of two primes.
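The statement E = P + P can be checked by machine for small even numbers. A minimal sketch, with an arbitrary limit of 1000; it is a toy version of the exhaustive searches mentioned above, not the method actually used for them.

    # Check that every even number from 6 to 1000 is a sum of two odd primes.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    odd_primes = [p for p in range(3, 1001, 2) if is_prime(p)]

    for n in range(6, 1001, 2):
        if not any(n - p > 2 and is_prime(n - p) for p in odd_primes if p < n):
            print("counterexample:", n)
            break
    else:
        print("all even numbers from 6 to 1000 are sums of two odd primes")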
Goldbach outlined this conjecture in a letter to Euler in 1742. Now the publisher FABER is offering a million dollar reward for anyone who submits a proof of the conjecture, or a counter example before March 15, 2002 [2].
Another interesting problem arises with functions which generate sequences of primes. For low limits the sequence x=199+210*j gives 10 consecutive primes:
199 409 619 829 1039 1249 1459 1669 1879 2089
Euler knew that the function f(x)=41+x+x^2 gives prime values for x=0 to 39. This quadratic sequence is far better than any linear function.
The cubic function 29+117t-20t^2+t^3 gives primes for its first 19 values.
181+205t-28t^2+t^3 gives 20 consecutive primes, with repetitions.
Mathematicians invented sets long before computer languages evolved. Sets are easy to copy. As a tool of thought, they are unprotected by legalese or copyright. They are public domain stuff. Mathematical terminology alone seems a sufficient deterrent. Given two sets A and B it is possible to imagine a table of pairs (a,b). The sets may be people and cars, or stock market shares and prices. The set of all possible pairs is written A x B and a typical element of the set A x B is the pair (a,b) with a in A and b in B. The set A x B is called the Cartesian Product. The number of elements of A x B is c(A) times c(B). If A and B are different horse races, then the prediction of a winner from each race is called a double, and is simply a member of the cartesian product. It is also possible to define cartesian products on more than two sets. The product formula can be generalised. In particular the size of the cartesian product of k copies of A is c(A) to the power k.
For any given set X it is possible to define a 'relation' as a subset R of the cartesian product X x X. An equivalence relation satisfies three simple rules:
Reflexive: if x in X then (x,x) in R.
Symmetric: if (x,y) in R then (y,x) in R.
Transitive: if (x,y) in R and (y,z) in R then (x,z) in R.
Given an equivalence relation it is possible for any x in X to define a unique subset of X containing x: class(x)={y | (x,y) in R}. This is called the equivalence class of x with respect to R. Any two distinct equivalence classes are disjoint, for if a in class(x)^class(y) then (x,a) and (y,a) are both in R, so (a,y) in R by symmetry, and therefore (x,y) in R by transitivity, and in fact class(x)=class(y). Since the reflexive rule implies that class(x) contains at least x as an element, it follows that every element of X is in at least one equivalence class. The set of equivalence classes is often written X/R. Since the classes are disjoint, and every element is in just one equivalence class, we can write X = union {class(x) | x in X}.
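A minimal sketch of splitting a finite set into its equivalence classes, with the relation supplied as a predicate; the function name and the mod-3 example are invented for illustration.

    # Split a finite set X into equivalence classes under a relation given
    # as a predicate related(a, b).  Assumes the relation really is
    # reflexive, symmetric and transitive, so one representative per class
    # is enough to test membership.
    def equivalence_classes(X, related):
        classes = []
        for x in X:
            for cls in classes:
                if related(x, next(iter(cls))):
                    cls.add(x)
                    break
            else:
                classes.append({x})
        return classes

    # Example: integers 0..9, related when they have the same remainder mod 3.
    print(equivalence_classes(range(10), lambda a, b: a % 3 == b % 3))
    # three disjoint classes: {0,3,6,9}, {1,4,7}, {2,5,8}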
Example: A person cannot exist in a modern state without being slotted into a cartesian product of some pair of sets. If a person is treated for a disease the bureaucracy maps that person into the cartesian product of (people x diseases). A relation can be defined on the set of people by saying that (a,b) in R if a and b suffer from a given disease. Not only AIDS, but also multiple injuries will appear in clusters. Needless to say modern computers are incapable of doing justice to these types of databases. Analysis of these relationships is crucial in discussions on public financing.
Example: Consider education, with the relation R being determined by school. Two people a,b are in this relation if there is a finite set of people a=x[0],x[1],x[2], .. x[n]=b where x[i],x[i+1] both went to the same school. If a person x did not go to school at all, then let (x,x) in R. Then the population can be broken into a disjoint union of equivalence classes. When inter-school transfers are common there may be just a single class. When the education of women is forbidden then there are at least two classes. In a global world these classes are unlikely to correspond to geography. A classless society could be seen as a world where everyone participated in the same distance learning scheme. Attempts to make everyone participate in the same school _system_ have been tried many times. When schools have been dominated by competing religions there have occasionally been problems. Northern Ireland is seen by many as a classic example. If teaching becomes a less attractive profession, and the cost of internet connectivity falls drastically, then the uneducated could become replaced by the 'media-educated'. Whoever dominates the media effectively controls education for large numbers of people.
When a set is finite the class equation can be written: c(X) is the sum of the sizes c(C) of the distinct equivalence classes C in X/R.
Besides relations, the subsets of A x B include the graphs of functions. A function from A to B, written f:A->B, is simply a subset G of A x B such that for each a in A there exists a single element b in B with (a,b) in G. For shares and prices the most common subset of the cartesian product is the price list, but other functions are possible: the price in two weeks' time, for example, or the previous year's low.
Example: Deoxyribonucleic acid, called DNA, is a molecule made up of components called adenine (A), cytosine (C), guanine (G) and thymine (T). Each of these components consists of a ring containing carbon and nitrogen and part of a sugar-like molecule (ribose). As in many organic compounds the presence of nitrogen gives part of the DNA molecule an alkaline property, so the components are called bases, as opposed to acids. The base property helps in the formation of chemical bonds. The DNA molecule is a chain of any of the bases A,C,G and T in some particular order. The base pairs A,T and G,C also bond with each other in a natural way. RNA is similar to DNA except that thymine (T) is replaced by uracil (U). The AIDS virus is made up of RNA molecules. There are 16 possible base pairs and 64 triples. Scientists have determined that certain triples serve to encode amino acids, which are components of proteins and enzymes. There are about 20-22 amino acids which occur in the common proteins of life. The 64 base triples may be divided into equivalence classes according to the particular amino acid that the triple encodes. Such triples are called codons.
There are also some base triples whose function is not explained. It is conjectured, on very strong evidence, that parts of the human DNA sequence have been incorporated from other organisms at earlier stages of evolution. DNA is important because it is passed from parents to offspring. Sometimes there are small random changes, but often these do not matter: the encoding from 64 triples to about 20 amino acids implies a considerable degree of redundancy. Some very common amino acids will have more than one codon. The genetic code is a mapping:-
gc: CODONS -> Amino Acids
The mapping is not one to one. Different organisms make the same proteins for similar functions, but they use slightly different base sequences. People can estimate a difference between DNA sequences for making a given protein by counting the number of base positions that differ. If A and B are different DNA sequences, then it is possible to define a distance function d(A,B). For three pieces of DNA A, B and C the distance function satisfies d(A,C) <= d(A,B) + d(B,C). This is the triangle inequality and it corresponds with our normal notions of space.
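One simple distance of this kind is the Hamming distance: the number of positions at which two sequences of equal length differ. It satisfies the triangle inequality. A minimal sketch; the three sequences are invented.

    # Hamming distance between two base sequences of equal length.
    def d(A, B):
        assert len(A) == len(B)
        return sum(1 for a, b in zip(A, B) if a != b)

    A = "GATTACA"
    B = "GACTATA"
    C = "GTCTATA"
    print(d(A, B), d(B, C), d(A, C))      # 2 1 3
    assert d(A, C) <= d(A, B) + d(B, C)   # triangle inequality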
DNA space is like normal space with its streets and alleyways. Species evolution is a set of trajectories in DNA space. A very tiny part of this story forms the major scenario for Julian Rathbone's political novel TRAJECTORIES. Einstein used Riemann Geometries to track real space, and similar tools from analysis and topology will be needed to explain DNA space.
DNA space is a real battleground. AIDS, TB, Cholera and Malaria with its associated sickle-cell syndrome are some of the trajectories. Evolution on computers is usually modelled by some sort of function iteration and the mathematical discipline of dynamical systems can provide some of the theory to check out the results. The French mathematician Poincare invented dynamical systems about one hundred years ago.
One of the most fascinating scientific ventures of the twentieth century was the human genome project. Human DNA is estimated to have about 3*10^9 base pairs. Each DNA molecule may therefore have an enormous number of different configurations, more than 10^(10^9). Biologists often joke about GOD as the 'generator of diversity'. Following investment by some private companies the Human Genome Project is almost finished. In parallel, efforts are being made to elucidate the DNA sequences of micro-organisms.
This involves intensive number crunching and the employment of skilled scientists. Biotechnology research is controversial and it has become easy to mobilise demonstrations against some applications such as genetically modified foodstuff (GM foods).
The real issue is the expense of the research, its financing and its public accessibility. Riemann and Poincare published their results in academic journals, but one hundred years later corporate lawyers are fighting to prevent the publication of knowledge of the geometry of DNA space. They are behaving worse than the wretched priest cults that tried to monopolise predictions of the Nile flooding in the days of the pharaohs. The fact is that DNA space is so complicated that they need the help of the best minds in the world to sort out the navigation problems.
The corporate elite that seeks to monopolise knowledge of DNA space are exactly the same types that persecuted Linus Pauling, the first person to explain protein structure. These business leaders who wish to monopolise knowledge have allies amongst many conservative religious people who also want to control access to knowledge.
Just as Newton may have been concerned about stability of the heavenly bodies, so modern scientists are concerned about other types of orbits. These are trajectories in DNA space, and also trajectories in economic space. The latter are really a consequence of human demography. Genes and DNA sequences need far more than mere human populations to express their differences. Micro-organisms have much larger populations than humans. This is good for evolution, because certain things must happen with very large populations. Just how the size of a population may force apparent coincidences is an application of the specialisation of set theory known as Ramsey Theory. The simplest statement in Ramsey theory concerns meetings with six people. In such a meeting there must be either a clique of at least three people who are mutually acquainted, or else there must be at least three people all of whom are complete strangers to each other.
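This smallest Ramsey statement can be checked by brute force: colour each of the 15 pairs among six people as either acquainted or strangers, and look for a single-coloured triangle in every colouring. A minimal sketch.

    # Brute-force check: among any six people there are three mutual
    # acquaintances or three mutual strangers.
    from itertools import combinations, product

    people = range(6)
    pairs = list(combinations(people, 2))          # the 15 pairs

    def has_mono_triangle(colouring):
        colour = dict(zip(pairs, colouring))
        return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
                   for a, b, c in combinations(people, 3))

    # try all 2^15 = 32768 colourings of the pairs
    assert all(has_mono_triangle(col) for col in product((0, 1), repeat=15))
    print("every 2-colouring of the 15 pairs contains a one-coloured triangle")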
DNA space may seem a very abstract entity, but other people's failure to understand this could get you locked up in jail for an indefinite period of time.
The police take DNA samples from the general public following a particularly brutal and horrific sex murder. They then send the samples to a laboratory whose corporate owners have a particular financial interest in incarceration as an industry, just like Stalin's Gulag system of the 1930s and 1940s. The laboratory is staffed by underpaid and demoralised workers who use their computers to map areas of DNA space defined by samples found near the dead body. Tabloid sensationalism puts the police under intense pressure to make an arrest, so they pick up the person with the nearest DNA match. The victim's brother, or a parent, is a good guess, especially if the victim is in a minority group. An innocent man is condemned, and those who seek to be morally outraged look for other people to persecute.
Modern states seek to gain credentials by cataloguing the DNA of criminal suspects. At least we know that DNA space is a little bit like Euclidean space.
As it happens functions got invented well before 'Set Theory'. Functions are like the act of copying even before the media was there. Functions were linked with procedures for evaluating numerical quantities, but methods like table look-up are perfectly good. If you have a graph G as a subset of A x B and you want to find the value for x just search the graph G for the correct pair (a,b) with a = x. Call the corresponding value f(x).
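In code such a graph is simply a lookup table. A minimal sketch; the share names and prices are invented.

    # A function given by its graph: a set of pairs (a, b), one pair per a.
    # Evaluation is just table look-up.
    G = {("ACME", 101.5), ("EMCA", 7.25), ("MECA", 55.0)}

    def f(x):
        for a, b in G:
            if a == x:
                return b
        raise KeyError(x)

    print(f("EMCA"))    # 7.25
    # in practice the graph is usually stored as a dictionary: dict(G)["EMCA"]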
If A,B,C are sets, and f:A->B and g:B->C are functions, then it is possible to define a composition gf:A->C so that gf(x) = g(f(x)). A function is one-to-one, or injective, if f(a)=f(b) only if a=b, and it is onto, or surjective, if every b in B is the image f(a) of some a in A. A set A is finite if c(A) is an ordinary number. In this case it is possible to define a function from A to the sequence {1,2,...,c(A)}. This comes from any listing of the different elements of A as a[1], a[2], ..., a[n]. The graph of a function f:A->B can then just be written b[1],b[2],...,b[n] where each b[i] is in B. That is, the set of possible graphs is just the cartesian product of c(A) copies of B.
If there is a function f:A->B which is 1-to-1 and onto then A and B are said to be in one to one correspondence. Each b in B must be the image of a unique a in A with f(a)=b. Define the function g:B->A such that g(x) is the unique y with f(y)=x. Then g(f(y))=y for any y in A, and f(g(x))=x by definition of g. The function g is said to be an inverse of f. A set X is said to be countable if its elements can be put in 1-to-1 correspondence with the set of counting numbers {1,2,...,n,...}.
The set of positive whole numbers {1,2,3... etc} along with 0 and the negative numbers is a set. This set is usually denoted Z, the symbol coming from the German word 'Zahlen' for numbers.
Notice that Z has perfectly good 1-to-1 functions that are not onto. The function n->2*n is 1-to-1 but its image excludes the odd numbers. Similarly the set of positive numbers {1,2,..} has one to one functions which are not onto. The obvious choice is the successor function n->n+1. There is a hotel plan named after this fact. The 'Hilbert Hotel' can accommodate new guests even when all the rooms are full. It is also possible to double the population even when all the rooms are full. If a single guest arrives, just tell every guest to move up a room. If a huge crowd arrives, just tell everyone to move from room number n to number 2*n. Perhaps not very practical, but at the end of the twentieth century many politicians still seem to think they can expand prison populations this way. If A and B are two countably infinite populations then the Hilbert Hotel can also accommodate the set of pairs A x B, with each pair in a single room. With a finite set of rooms, double occupancy eventually becomes necessary. This is called the Dirichlet Box principle. For a hotel of size N with N+1 guests, at least two must share.
The set of functions from A to B is itself a set. If A has c(A) members and B has c(B) members then the size of this set, Map(A,B), is c(B)*c(B)*...*c(B) multiplied c(A) times, that is c(B)^c(A). In particular the set of functions t:A->{0,1} has 2^c(A) elements. This is also the number of subsets of A. For each X which is a subset of A define the function tX:A->{0,1} where tX(z) = 1 if z is in X and 0 otherwise.
When A is a finite set the set Map(A,A) of all functions f:A->A has c(A)^c(A) elements. If f:A->A is one to one, then the sequence of function values f(a[1]), f(a[2]), ..., f(a[n]) defines f, where n=c(A). There are n ways of selecting b[1]=f(a[1]) but only n-1 choices for b[2]=f(a[2]) since the function is 1-to-1. Similarly there are only n-2 ways of choosing b[3]=f(a[3]) and so on. The number of such mappings is n(n-1)(n-2)...2.1 or n factorial. This is often written n!, or n-shriek. This quantity is also known as the number of permutations of n objects. If s and t are two mappings of the set N={1,2,3,...,n} to itself then it is possible to define the composition of the two functions r=st by the formula r(i) = s(t(i)) for each i in N. The permutation i:N->N given by i(k) = k for all k is called the identity permutation. For the set N the number of mappings in Map(N,N) is n^n. Given any mapping s in Map(N,N) and any k in N the set {k, s(k), s(s(k)), s(s(s(k))), ... m terms} is a subset of N. When n=c(N) is finite and m>n then two of the iterated values (s^j1)(k) and (s^j2)(k) must be equal. This essentially means that the sequence k[n+1]=s(k[n]) repeats itself after a certain number of terms. This set is called the orbit of k under the mapping s.
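A minimal sketch of computing the orbit of an element under a mapping of a finite set to itself; the mapping is an invented example, written as a list so that s[k] is the image of k.

    # Orbit of k under a mapping s of {0,...,n-1} to itself:
    # iterate s until a value repeats.
    def orbit(s, k):
        seen = []
        while k not in seen:
            seen.append(k)
            k = s[k]
        return seen

    s = [3, 2, 2, 5, 0, 1]       # an invented mapping of {0,...,5} to itself
    print(orbit(s, 0))           # [0, 3, 5, 1, 2], after which the values repeat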
A group G is a set with a law of composition, written here as a product ab, which is closed and associative. The element e such that ae=ea=a for all a is called the identity element. The element a', corresponding to a, such that a'a=aa'=e, is called the inverse of a.
Example: The set {1} with 1*1=1.
An Abelian group satisfies all of the axioms of a group, along with one further rule.
G4: ab = ba for all a,b in G. Commutative axiom.
In the case of an Abelian group the law of composition is written as '+', and the identity element is called zero and given the symbol '0'. It is said that the Hindus were the first to write about this feature of ordinary numbers. The additive inverse of an element a is usually written -a.
A ring R is an Abelian group, with an additive law of composition '+', and an additive identity, 0. There is a further law of composition for pairs of elements r,s in R known as the product. The product of the elements r and s may be written rs. Rings satisfy the following axioms.
R0: For each r, s in R the product rs is in R.
Example: The counting numbers 1,2,3 etc along with zero and the negative numbers -1,-2,-3 etc form a commutative ring with an identity element 1 under the usual laws of addition and multiplication.
If R and S are two rings, the cartesian product R x S can be made into a ring by defining componentwise addition and multiplication. The sum and product of the pairs (r1,s1) and (r2,s2) are (r1+r2,s1+s2) and (r1*r2,s1*s2) respectively. More generally, if X is any set the set of functions f:X->R can be made into a ring by defining sums and products elementwise. If f and g are functions in Map(X,R), define (f+g)(x) = f(x)+g(x) and (f*g)(x) = f(x)*g(x). These definitions create the necessary ring structure. When R is the ring Z2 with two elements the set Map(X, Z2) is a ring. There is a 1-to-1 correspondence between elements of this ring and subsets of X. If A is a subset of X then define (tA)(s)=1 if s in A and 0 otherwise. If A and B are two subsets then ((tA)*(tB))(s)=1 if and only if s in A^B, and (tA+tB)(s)=1 if and only if s is in A or B but not both (s in (A u B)-(A^B)). This shows that the set of subsets of a set X may be made into a ring. This is known as the Boolean Algebra of the set X. Boolean Algebras are important in Mathematical Logic, and also in integrated circuit design.
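A minimal sketch of this ring of subsets, using Python's set operators: symmetric difference plays the role of addition and intersection the role of multiplication. The sets are invented examples.

    # Subsets of X as a ring: tA + tB is the symmetric difference
    # (elements in exactly one of A, B), tA * tB is the intersection.
    A = {1, 2, 3}
    B = {2, 3, 4}
    C = {3, 4, 5}

    def add(P, Q):
        return P ^ Q     # symmetric difference

    def mul(P, Q):
        return P & Q     # intersection

    assert add(A, A) == set()                               # every element is its own negative
    assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))   # a distributive law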
In any ring with a 1 the following identity holds. This is known as the binomial theorem.
(1+x)^n = 1 + nx + n(n-1)/2! x^2 + n(n-1)(n-2)/3! x^3 + ...
Here the general term is n!/(k!(n-k)!) x^k. The quantity n!/(k!(n-k)!) is known as the number of combinations of k elements taken from n objects. It can be written as C[n,k].
Proof: By induction. Show that if the theorem is true for n, then it is true for n+1, and therefore for all n.
With n = 1, then (1+x)^1 = 1+x.
To come back to the product of rings, consider the simplest ring Z2. Then Z2xZ2, Z2xZ2xZ2, ... etc form a family of rings. The members of Z2xZ2 may be written as the set of pairs {(0,0),(0,1),(1,0),(1,1)}. Alternatively the elements could be written {0,x,y,x+y} with z+z=0 and z*z=z for all z.
The elements of Z2xZ2xZ2 may similarly be listed as 0, x, y, z, x+y, x+z, y+z, x+y+z. In terms of co-ordinate listing the quantities x,y,z stand for the triples (1,0,0),(0,1,0),(0,0,1). The sum of any pair of these elements is a triple with two 1s. The values x,y,z represent vertices of a cube, and x+y, y+z, x+z are the next vertices as you move towards the vertex opposite (0 0 0). The numbers of points at distance d from the origin (0 0 0) correspond to the binomial coefficients in (1+x)^3. For any number n the product Z2xZ2...xZ2, n times, is exactly like the set of vertices of a hypercube of n dimensions. The points can be represented as n-tuples (0 0 ..1 ..0) etc with 1s in certain of the n positions. The total number of elements, or vectors as they are known, is 2^n, and there are n with a single 1 co-ordinate, n(n-1)/2 with two non-zero coordinates and so on. The general formula C[n,k] for k non-zero coordinates can be proved by induction similarly to the binomial theorem above.
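The count of vertices by the number of non-zero co-ordinates can be checked directly. A minimal sketch for n = 5; math.comb supplies the binomial coefficients.

    # Vertices of the n-dimensional hypercube are n-tuples of 0s and 1s.
    # Counting them by the number of 1s gives the binomial coefficients.
    from itertools import product
    from math import comb

    n = 5
    counts = [0] * (n + 1)
    for vertex in product((0, 1), repeat=n):
        counts[sum(vertex)] += 1

    print(counts)                                   # [1, 5, 10, 10, 5, 1]
    assert counts == [comb(n, k) for k in range(n + 1)]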
Example: A winning line is a set of six numbers from the set {1 .. 49}. There are 2^49 subsets of this set, but only those with six elements can be drawn. The six-element sets correspond to 49-tuples over Z2 with just six non-zero co-ordinates, and there are just C[49,6] of these. This value is (49 x 48 x ... x 44) divided by 6!, or as calculated by this formula
(PRODUCT 49-i6) % (PRODUCT 1+i6) = 13983816
Here PRODUCT X just gives a high precision product of a set of numbers X, and '%' stands for divide. The index generator 'i' or 'iota' in APL simply gives a set of consecutive numbers: i6 is the set {0 1 2 3 4 5}. The chance of a random draw being correct is about 14 million to one, but the payout is often at much lower odds. The lottery operator accepts unlimited losses by the public, but the operator shares a single jackpot among the lucky winners when it loses.
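The same count in Python, mirroring the APL-style formula above; math.comb gives the value directly as a check.

    # Number of six-element subsets of {1 .. 49}.
    from math import prod, comb

    i6 = range(6)                            # the index set {0 1 2 3 4 5}
    value = prod(49 - i for i in i6) // prod(1 + i for i in i6)
    print(value)                             # 13983816
    assert value == comb(49, 6)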
The binomial theorem gives a way of counting subsets of sets. It also holds true in any ring R. When R = Z, the set of integers, the first few lines of the triangle look like this. Each number is the sum of the two immediately above it.
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
When R=Z2, the field of two elements, you get a triangle with holes in it. The study of objects with holes in them includes a specialised field of mathematics called topology. Early topologists included Hausdorff and Sierpinski. Their major concern at the time was the search for counter examples in theories of continuous functions. The early efforts were motivated by the search for space filling curves or conversely, well behaved 'perfect sets'.
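A minimal sketch that prints the first rows of the binomial triangle over Z2; the '#' marks are the odd entries and the holes form the Sierpinski pattern.

    # Pascal's triangle modulo 2: each entry is the sum (mod 2) of the two above.
    row = [1]
    for _ in range(16):
        print("".join("#" if c else " " for c in row))
        row = [(a + b) % 2 for a, b in zip([0] + row, row + [0])]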
The set of integers Z is a ring. For any non zero number m it is possible to define an equivalence relation on Z such that two numbers x and y are equivalent if and only if the difference x-y is divisible by m. For any number n > m it is possible to find numbers q and r such that
n = mq + r with 0 <= r < m.
Here q is the quotient and r is the remainder of division of n by m. Since n-r = mq we see that n is equivalent to r, a number which may take only one of m values. For given n the remainder r after division by m is often called n mod m. The number of equivalence classes is finite, and each equivalence class is generated by the set { r+mq | q in Z}, often written r+mZ. It is possible to define addition and multiplication on the set of equivalence classes.
(x+mZ) + (y+mZ) is (x+y mod m)+mZ, and (x+mZ) * (y+mZ) is (x*y mod m)+mZ.
These classes are called the integers modulo m, often denoted Zm. The ring of equivalence classes is also written Z/mZ.
The values of integers are stored in a computer memory. Calculations are done with a processing unit. Most present day computers do arithmetic on the set Zm where m is a power of two. Old computers like the APPLE II had a 6502 processor which only did 8-bit arithmetic. The byte values 0-255 were interpreted as the numbers -127 through to 127, with the remaining value -128, represented as the byte 10000000, taken to be minus infinity. Normal additions and subtractions were carried out modulo 256.
Apple's technical genius, Wozniak, developed sixteen-bit arithmetic in software. He called this Sweet Sixteen. But even this system has its limitations. Numbers greater in magnitude than about 32000 cause problems. The computer hardware recognises equivalence classes of numbers, rather than the numbers themselves. Five kilometers looks like five miles. NASA programmers found this out the hard way: a recent Mars module got an excessively hard landing because of this type of mistake.
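A minimal sketch of how arithmetic on equivalence classes modulo 256 behaves like 8-bit arithmetic, assuming the usual two's complement reading of the bytes 128..255 as negative numbers.

    # 8-bit arithmetic is arithmetic in Z256, with bytes 128..255 read as
    # negative numbers (two's complement).
    def to_signed(byte):
        return byte - 256 if byte >= 128 else byte

    a, b = 100, 50
    total = (a + b) % 256
    print(total, to_signed(total))    # 150 -106: the true sum 150 reads as -106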
Computers cannot really do accurate arithmetic. Programmers can make the computer do arithmetic to a high precision, but there will always be problems which cannot be cracked. All the people that received erroneous bills or bank statements are fully aware of this fact. If numbers cannot be dealt with in a satisfactory manner, then why not just use the computers to sort lists of names and addresses as in 1960s style sales ledger programs? If the computer can manipulate text strings then it should also be possible to implement math systems which deal with all of the orders of infinity invented by Georg Cantor and others.
Formal mathematics anticipated these developments in the 1930s. Church, Godel, Post and Turing contributed very much to the human achievements of the twentieth century.
Every schoolchild should have the opportunity to learn a little about symbol manipulation systems. The old fashioned business of solving equations is an exercise in doing this. But solving equations often makes use of rules which may not be accurately stated. The material will seem divorced from reality to many. Time to think is important.
An equation for solution involves one or more symbols representing unknown quantities, usually numbers. If a pupil does not know how to divide, or possibly to extract square roots, then there seems little purpose in the exercise. That's where symbolic manipulation comes in.
To solve ax=b just write x=b/a. This is an example of a formula. Perhaps one of the best known collections of formulae is an Indian text called the Kama Sutra. It's a classical sex manual, and the literal translation is simply Love Formula. Sutra comes from the Sanskrit root word for formula, while Kama corresponds to physical love.
Maxwell's equations are formulae connecting the electromagnetic forces, and from these come the theory of radio and television and the ability for people to watch pornographic broadcasts worldwide. The formula is the connection.
When thinking about formulae forget the calculations for a while and reach for the heavens.
If R is a ring, it is possible to add symbols to the ring so that the ring axioms R0 to R4 are satisfied. Just add the symbol X and call the new ring R[X]. The element X satisfies 1.X=X.1=X and the value X*X is written X^2. Multiplying n copies of X together is written X^n. The elements of the ring R[X] are written as sums a[0]+a[1]X+a[2]X^2 etc, with only a finite number of terms. Addition of two ring elements is done componentwise, while multiplication is achieved by collecting equal powers of X.
In the old days algebra was taught in schools by giving the pupils hundreds of exercises such as:
Multiply 1+2x+5x^5 by 3x^2+2x^3.
These exercises were often chosen to avoid difficult theoretical points such as the non-existence of factors. There were often circumstances where the teachers would not have time to deal with these difficulties because of political and economic constraints. A media dominated world will see these difficulties considerably increased for the next generation of teachers.
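A minimal sketch of multiplication in R[X], with a polynomial stored as a list of coefficients indexed by the power of X; it answers the exercise above.

    # Multiply two polynomials given as coefficient lists (index = power of X).
    def poly_mul(p, q):
        result = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                result[i + j] += a * b
        return result

    # (1 + 2x + 5x^5) * (3x^2 + 2x^3)
    p = [1, 2, 0, 0, 0, 5]
    q = [0, 0, 3, 2]
    print(poly_mul(p, q))    # [0, 0, 3, 8, 4, 0, 0, 15, 10]
    # that is 3x^2 + 8x^3 + 4x^4 + 15x^7 + 10x^8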
Exercise: Verify the following identities.
Some classical formulae attracted great controversy. The first was the formula for the area of a triangle. If A is the area of a triangle with sides a,b,c, then the quantities are related by the equation:
A^2 = s(s-a)(s-b)(s-c) where s = (a+b+c)/2.
Effectively the area is given as the square root of a number. This confounded some ancient philosophers, although the formula is quite accurate and practical. The formula may be re-arranged to eliminate s. In the past students and mathematicians would use pencil and paper to do the calculations, but nowadays it can be done on the computer.
$ AOS"(a+b+c)(a+b-c)(a-b+c)(b+c-a)"Systems such as MAPLE, MATHEMATICA, MATLAB , TK-solve and so on can do most such calculations in an instant, while the user is given time to think if using the software which should accompany these notes. Whatever the reader may think of the calculations, the symmetry of the formula should be quite evident. The values a,b,c represent the lengths of sides, or distances in space. When a=b=c it is easy to see that 16A^2=3a^4 or A= #sqrt(3/2)a^2 , where #sqrt stands for square root.
In fact the formula as written looks valid for all sets of numbers. However if a,b,c represent the lengths of a normal triangle then we know that a<=b+c and the same for the other sides. This is certainly true in Euclidean space, and also DNA space.
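A minimal sketch of the area formula in code; the 3-4-5 triangle makes a convenient test because its area is known to be 6.

    # Heron's formula for the area of a triangle with sides a, b, c.
    from math import sqrt

    def area(a, b, c):
        s = (a + b + c) / 2.0
        return sqrt(s * (s - a) * (s - b) * (s - c))

    print(area(3, 4, 5))     # 6.0
    # equivalent form without s: 16*A^2 = (a+b+c)(a+b-c)(a-b+c)(b+c-a)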
Two of the most famous formulae in history are:-
Pythagoras Theorem is easy to prove by algebra. Given a right angled triangle with sides of length a,b,c with a<=b<=c, construct a square of size a+b, and fit in four triangles and a square as shown below.
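In the usual arrangement the four triangles sit in the corners of the big square and the inner square has side c, so comparing areas gives:

(a+b)^2 = c^2 + 4(ab/2)
a^2 + 2ab + b^2 = c^2 + 2ab
a^2 + b^2 = c^2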
This proof is reputed to have originated in China.
The triangle ABC may be split by drawing a perpendicular line from A to its opposite side. If the numbers a,b,c represent the lengths of the sides and x,y the distances BX and AX then we have:
Area = A = 1/2 * base * height = ay/2
In May 2000 a Filipino hacker released the Love Bug virus. More exactly the offending code was a 'Visual Basic Worm'. The worm was easily able to propagate amongst the computers of governments and large corporations. Following attacks on capitalism by demonstrators in Seattle, Washington, London and Chiang Mai, governments and lawmakers became unduly sensitive and concentrated more forces on tracking school kids than they ever put into catching Osama Bin Laden.
The Lovebug worm is a script written in Visual Basic which is activated when users of Microsoft Outlook open e-mail attachments. An e-mail attachment is an encoded package that comes along with a plain text message. There has never been any guarantee that e-mail attachments are wholesome. Indeed many in large organisations will exchange pornographic images and suchlike. It is rumoured that the Lovebug Worm cost its victims up to one billion dollars in lost business. The world probably benefitted because much of this so called 'business' is leading to environmental degradation and oppression of the poor. The writer, Ramel Ramirez, did the world a favour.
The ease of propagation of the worm highlights the incredible stupidity of corporate fascism. The genetic sequencers promise cures for malaria or AIDS by their understanding of DNA, but the industrial complex is unlikely to deliver such results quickly. What they are really seeking is loads of money immediately, and the benefits to humanity will 'trickle down' towards the poor.
Modern medicine has made vast advances in explaining immunity, but the explanations offer little consolation to individuals. What happens is that immunity works in whole populations [4]. Those who are immune to a disease act as a barrier to the propagation of the disease causing pathogen. In fact only a relatively small proportion of the population need immunity to drastically curtail disease propagation. This effect is called 'herd immunity'. Herd immunity was important in preventing the Love Bug worm doing any real harm. Users of computers whose managers were not addicted to Microsoft Office and its cancerous overgrowths were completely unaffected.
These ideas about immunity come from physics. Boltzmann and Maxwell contributed to the kinetic theory of gases which describes the movements of gas molecules as random walks with very predictable effects for the extremely large numbers of molecules involved. This random motion of molecules can also be observed with a microscope, and is named Brownian motion after the discoverer. Temperature is related to the velocity of motion of molecules.
Mathematicians had been working on solutions of the heat equation since the 1820s. This equation and its solutions arise in areas of research ranging from the Theta Functions of Jacobi to the Wall Street derivatives market.
Following the success of the kinetic theory of gases the same sorts of calculations were applied to newly discovered particles such as protons and neutrons. These calculations were particularly important in predicting the chain reactions involved in nuclear fission. Neutron absorbers and reflectors became key components in nuclear detonators.
The same models are applied to population dynamics. The modern interest in ecology and species diversity shows that people can be mobilised to express forceful views on possible absorbing and reflecting barriers in DNA space, but many of the most powerful players are sadly deficient in presentation skills. They also leave their own computer systems vulnerable to attack.
Random walk explanations start from very simple models. The first model is the simple coin tossing game with sequences of heads or tails. A sequence THHTH.... etc can be translated into motion in many different ways, but the simplest is simply counting the number of occurrences of one of the faces, which is considered a success. This gives an ever increasing sequence, which can be translated into the motion of a point just moving forever in one direction at an irregular velocity.
It is possible to ask the probability that the point has moved k steps in n trials, and also where the point is most likely to be. With one trial there is either no motion, or a displacement of one step. With probabilities of success and failure given the values p and q, the number of successes S is 1 with probability p and 0 with probability q. This can be written pX+q where X is a symbol. When p=q=1/2 then just write 1+X. It then happens that all the probabilities of moving k steps in n trials can be obtained by reading off the coefficients in the series (1+X)^n, dividing by a factor of 2^n. There are many proofs of this in textbooks, but the idea is to reduce statements about motion to features of polynomial multiplication. Polynomials that correspond to movement or growth are often called generating functions. What physicists did was to go and take straight limits giving things like the bell curve much loved by certain statisticians and social scientists.
Whatever the nature of the proof the theory works well in practice. Any player of backgammon or monopoly will be aware that seven is the most common total of two dice. The relative probabilities of the totals are given by reading off the coefficients of
(x+x^2+x^3+x^4+x^5+x^6)(x+x^2+x^3+x^4+x^5+x^6)
This gives seven as the most common outcome. Other features of generating functions are their ease of application. The rules don't change across different computer or language systems. As it happens genetics follows the same patterns. Going from one generation to the next, various outcomes can be read off as coefficients in a polynomial. Many scientific processes are explained by taking limits of these coefficients, but DNA space is different. Just as physics has string theories, so undoubtedly we will see non-archimedean metrics applied to DNA space.
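To come back to the dice: a minimal sketch that multiplies the two die polynomials above and reads off the coefficients of the product.

    # Distribution of the total of two dice, read off from the coefficients
    # of (x + x^2 + ... + x^6)^2.
    def poly_mul(p, q):
        result = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                result[i + j] += a * b
        return result

    die = [0, 1, 1, 1, 1, 1, 1]        # x + x^2 + ... + x^6
    totals = poly_mul(die, die)
    for total in range(2, 13):
        print(total, totals[total])    # 7 occurs 6 times out of 36, the most often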
In the game of Bridge each player gets 13 cards from a pack of 52. Decisions are made on point counts for high cards. The most frequently used system counts 4 for an Ace, 3 for a King, 2 for a Queen and 1 for a Jack. Zia Mahmood publicised a generating function for this point count in The Guardian.
(1+y)^36 (1+xy)^4 (1+x^2y)^4 (1+x^3y)^4 (1+x^4y)^4
Zia posed a question about these distributions in his Christmas competition which is published every year in the Guardian. He got several exhaustive analyses in his responses along with correct answers on bidding and playing hands. The formula was sent by Dr Jeremy Bygott of Oxford. The coefficient of y^13 gives the generating function for the point count in a 13 card hand. This is a beautiful solution.
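The coefficient of y^13 can be extracted with a small two-variable polynomial routine; a minimal sketch, storing a polynomial as a dictionary keyed by the powers of x and y. As a check, the total over all point counts should equal the number of 13-card hands, C(52,13).

    # Expand (1+y)^36 (1+xy)^4 (1+x^2 y)^4 (1+x^3 y)^4 (1+x^4 y)^4 and read
    # off the coefficient of y^13: the point-count distribution of a hand.
    from math import comb

    def mul(P, Q):
        # polynomials as dicts {(power of x, power of y): coefficient}
        R = {}
        for (i, j), a in P.items():
            for (k, l), b in Q.items():
                R[(i + k, j + l)] = R.get((i + k, j + l), 0) + a * b
        return R

    poly = {(0, j): comb(36, j) for j in range(37)}                  # (1+y)^36
    for points in (1, 2, 3, 4):                                      # J, Q, K, A
        poly = mul(poly, {(points * j, j): comb(4, j) for j in range(5)})

    hand = {i: c for (i, j), c in poly.items() if j == 13}           # coefficient of y^13
    assert sum(hand.values()) == comb(52, 13)
    print(hand[10])     # number of 13-card hands holding exactly 10 points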
An array is a set of numbers arranged in lines, often called rows or columns. This meaning of the word is relatively new.
In feudal times the King could call on his vassals to provide arms and men in times of war. The managers of these mixtures of unwilling conscripts along with vain and boastful knights were called 'Lords of the Array'. The Chinese lost Hong Kong and Shanghai because they adhered to this system well into the nineteenth century. The British themselves fell victim to these feudal hangovers when a coterie of upper class generals, collectively termed 'The Donkeys', went and ordered hundreds of thousands of working class men to charge German machine gun emplacements during the First World War. These generals had learned nothing from the losses of the Chinese during the Opium wars of the 1840s and 1850s.
In the 1700s Euler investigated the problem of '36 army officers'. There are six ranks of officer, and six regiments. The idea was to arrange these officers in a 6x6 square so there were no two from either the same rank or regiment standing in line.
In 1782 Euler conjectured the problem was impossible.
In 1900 G. Tarry showed that Euler had been right, through a brute force enumeration of all 6x6 Latin squares.
Godement's treatise on Algebra, written in the 1960s, contained tables showing the number of bombs dropped by the Americans on Viet-Nam. An exercise asked the reader to verify the associative laws of addition by adding up items in these tables.
More recently Ian Stewart described a military array problem in his regular Scientific American column [3]. The commander in chief (C-in-C) inspects his troops arranged into a certain number of squares. The C-in-C then takes his place in the army whose order of battle is a single huge phalanx, which also happens to be a square. They are unlikely to win the battle this way, but the size of the army is what counts here. The C-in-C inspects 61 equal squares of troops. The size of the army is the lowest integer solution to Y^2 = 1+61*X^2.
The equation Y^2-M*X^2=1 is known as Pell's equation. It is a perennial favourite for maths puzzles.
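A minimal sketch that finds the smallest solution of Y^2 - M*X^2 = 1 by running through the convergents of the continued fraction of the square root of M; Python's unbounded integers cope with the enormous answers.

    # Smallest solution of y^2 - m*x^2 = 1 for m not a perfect square,
    # via the continued fraction expansion of sqrt(m).
    from math import isqrt

    def pell(m):
        a0 = isqrt(m)
        p, q, a = 0, 1, a0            # state of the continued fraction
        h_prev, h = 1, a0             # convergent numerators
        k_prev, k = 0, 1              # convergent denominators
        while h * h - m * k * k != 1:
            p = a * q - p
            q = (m - p * p) // q
            a = (a0 + p) // q
            h, h_prev = a * h + h_prev, h
            k, k_prev = a * k + k_prev, k
        return h, k

    print(pell(61))    # (1766319049, 226153980)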
Exercise: Try the problem with 61 squares, then ... 9349.
Tensors and spinors are just special types of array which are used to explain 20th century physics. In a sense tensors are like mixtures of functions and numbers. Partial derivatives feature in many formulae involving tensors. If you are into pure maths, look at any good book on Quadratic Forms. These will give you the group theory.
Following the success of the French Revolution the new regime co-opted some of the best scientists of the day to supervise reform of the calendar. The new year was divided into months with names drawn from European folk tradition. The English made fun of this innovation by adding their own meanings. The new French years ran from September to August.
Table YEAR_ZERO
The French calendar was worked out by a committee of experts which included the great mathematician Laplace. Months all had the same number of days. Extra days were added between some of the months to give public holidays. Every leap year would have an extra day which could celebrate the revolution.
Laplace ensured that his role on the calendar committee was not too prominent because it became quite dangerous to express views for a while. Two other members of the committee went to the guillotine, and when Lavoisier the chemist was condemned his judge went on the record as saying that the revolution had no need for scientists. Nowadays Laplace is best known for Laplace's Equation, which is of very great importance in electro-magnetic theory.
The extremists of the time, called Jacobins, brought in two important laws that acted as an influential political model for the future.
(1) Law of the Suspect.
Any person suspected of not giving one hundred percent support to the new regime could be arrested and brought before a Revolutionary Tribunal. If that person were convicted of being a counter revolutionary then there was a mandatory death sentence.
(2) Law of the Maximum.
The government could set maximum food prices by decree. Distributors who put up prices beyond the limit were obviously to be treated as saboteurs of the revolution.
The Jacobins also wanted to abolish torture. Their death sentences made use of the newest technology of the time: a heavy free sliding blade invented by Dr Guillotin. There is some evidence that Guillotin's technology was rather more appropriate for executions than the new fad introduced by the Americans one hundred years later. During the 1890s debates between Siemens and Westinghouse over A.C. or D.C. electric power systems, the Americans tried out wiring up condemned criminals to the electrodes; they are still debating this application of technology one hundred years after that. A cynic, or maybe Theodore Kaczynski, might well add that it is a pity that Laplace did not go to the guillotine as well as Lavoisier.
The Revolutionary Calendar lasted until one important date. This was the coup which saw some members of the Committee of Public Safety gang up against Robespierre, to save their own lives. These plotters obtained the assistance of a young French army officer called Napoleon Bonaparte.
It is fairly sure that Laplace was not the only astronomer who needed to be aware of the politics of the time. Most civilisations have had professional astronomers from ancient times. Some of them must have tried to get funding by predicting the fortunes of big-shots. Kepler himself is reputed to have cast a horoscope for Wallenstein, a famous European Warlord during the Thirty Years War (1618-48).
Napoleon instituted reforms in French education which are very much part of the scene today in 2010.
The Volterra equations remain unsolved at the time of writing. Volterra himself was directly interested in these equations. His son in law analysed statistics on the Adriatic fish catch. Population modelling has generally not received much funding. There is much commercial and political pressure to suppress this sort of knowledge. Industrial fisheries are a case in point. Scientists are generally ignored when they recommend prudence, while robber barons are keen to exploit the benefits of science to follow fish shoals with satellite navigation aids. In the meantime the law enforcement agencies never use high technology to catch these kleptocratic industrial pirates but turn their guns on desperate people trying either to migrate or to ship 'illegal drugs'.
The current day meaning of a list of numbers only became common towards the end of the nineteenth century, when mathematicians started a systematic study of linear equations and invented matrices, determinants and tensors. The British mathematicians Cayley and Sylvester were well known contributors to this field.
Ada, Countess of Lovelace [1], was an early pioneer of machine oriented calculations on arrays. One particular array taxed her mind. She worked on programming the calculation of the Bernoulli numbers. These are found in the power series expansions of cos(x)/sin(x) and x/(exp(x)-1).
Babbage had made a theoretical design of a computer to do calculations and although his computer never worked the period saw a whole lot of inventions for storing programs. These early methods included rolls of material with holes in them, or rotating drums with pins sticking out. The two methods correspond to male and female, Yin and Yang, or N and P layers of semiconductors which stand for the absence or presence of electrons.
The popular stored program machines of this era included textile machinery and clockwork music boxes. The Player Piano was perhaps the most elaborate such device. There were also early attempts to deceive the public into believing in 'chess playing computers', where the sponsor would hide a diminutive chess master in an engine and get people to play chess against it.
Einstein needed tensors in order to construct the Theory of Relativity. From that day the 'Lords of the Array' became completely reduced to the ranks of the proletariat. Arrays and matrices were only taught in advanced maths and physics courses, and they had more theoretical than practical use. Numbers were stored by writing them on paper.
It was still easier to commission armies of manpower to handle vast calculations. Disney studios employed hundreds of artists for its animations. The Atom Bomb project used hundreds of human calculators to solve some of the differential equations.
The 'National Emergency' of the Second World War saw great advances in computing and management science. The computing community split into two with 'Partial Differential Equations' opposed to 'Commercial Data Processing'. There were those who naively believed that increased computing power could make for a better society. The idea of economic planning had been OK in wartime, and food rationing schemes had sometimes been worked out on theoretical calculations. China and Russia both had governments which paid lip service to the planned economy. Whole systems of input-output equations had been invented to describe the economy. This was often called Marxism.
Unfortunately for Soviet and Chinese statisticians these equations and the input data were too hot to handle, because any data which reflected poorly on the performance of the regime would be suppressed, and cause the statisticians to be imprisoned or shot.
The earliest advocates of a planned economy often had to leave their countries in order to save their own lives. Karl Popper attempts to analyse the reasons for this in his book 'The Open Society and its Enemies'. Both Stalin's Russia and Hitler's Nazi Germany embraced state planning. Thinking men voted with their feet.
Nowadays statisticians do not run such high risks of being shot. Offending data is buried in ordure. There are plenty of subservient think-tanks to produce all sorts of idiotic reports to justify government policy. Nevertheless some data is so sensitive that people will be persecuted for revealing it. Drug prices, and food and drug safety issues are jealously guarded, and there are many half forgotten cases of alleged corporate manslaughter.
Stanley Adams, a former Hoffman La-Roche employee, lost his liberty and his wife when he revealed the drug company's European price fixing arrangements. Planned profit is precious.
Also in Europe there have been several suspicious deaths of frontline data collectors in the field of veterinary medicine. The additions of hormones and antibiotics to animal feed have raised concerns about health for decades, and the agri-business sector seems to have heeded these concerns by adding good old fashioned shit to the diets of the animals.
The reader will not see many arrays in government reports, or elsewhere, including the Internet. More often reports are presented as graphs or bar charts with carefully concealed scaling information. The array that a scientist may want to access could be buried in megabytes of PostScript code.
In real life data arrays are hard to accumulate. It needs some discipline to keep figures for a time series for example. Budget cuts and privatisation are the enemies of the modern statistician. Megamergers between rival pharmaceutical companies also contribute to the chaos. Any array has an associated size, normally the number of elements in the array. This is important. Sometimes new drugs and medical treatments are advocated even though there are more doctors doing the research than patients whose treatment is being evaluated.
With the increasing power of computers it gets easier and easier to maintain statistics, but most of them only reflect the enrichment of the elite: the profits of the big corporations and those of their immoral dealings which they wish to disclose to their old-boys club style regulators that run the World's capital markets. In the meantime those responsible for the data are cajoled into efficiently operating the machinery, rather than operating their brains.
Bernoulli numbers come from long division by power series. They can be used to estimate sums by integrals. The most famous formula in which they arise is Stirling's Approximation for n factorial (n! = 1 x 2 x 3 x ... x (n-1) x n). Bernoulli numbers are coefficients in the expansion
x/(e^x-1) = Sum b[n]x^n/n!
The Bernoulli numbers are calculated by inverting the series (exp(x)-1)/x. A method which minimises the use of long division is in the D4 script nm.d4f. Fast methods of computing these numbers with pencil and paper were pioneered by Ramanujan, who was reputedly able to do such calculations in his head. He was certainly able to calculate the first sixty or so. This was quite a feat in the days without computers, and eighty years after Babbage.
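A minimal sketch of the inversion, using the recursion that falls out of multiplying the series for x/(e^x - 1) by (e^x - 1)/x; exact rational arithmetic avoids rounding problems.

    # Bernoulli numbers b[n] from x/(e^x - 1) = Sum b[n] x^n / n!,
    # computed by the recursion  sum_{k=0..n} C(n+1,k) b[k] = 0  for n >= 1.
    from fractions import Fraction
    from math import comb

    def bernoulli(count):
        b = [Fraction(1)]
        for n in range(1, count):
            b.append(-sum(comb(n + 1, k) * b[k] for k in range(n)) / (n + 1))
        return b

    print([str(x) for x in bernoulli(9)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']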
Ramanujan gave his own name to a sequence of numbers. These are called the Ramanujan Tau Numbers and they are the coefficients of the famous product formula:-
Delta = q * Product (1-q^n)^24
It is possible to calculate such products by repeated polynomial multiplication but it is more interesting to rearrange the product via logarithmic differentiation.
F(x) = Product (1-x^n)^k
Taking logarithms and differentiating term by term gives an equation of the form x*F'(x) = A(x)*F(x). The factor A(x) is a power series whose coefficients are various 'sum of divisor' functions. These coefficients can easily be determined by sieve methods, and a recursion formula allows evaluation by brute force. At about the same time as Ada and Babbage were struggling with the invention of the computer, a young German mathematician called Jacobi was inventing theta functions and modular forms. Jacobi arrived at some amazing identities relating power series and infinite products.
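The brute-force route for such products is short enough to sketch: multiply the truncated series by each factor (1 - x^n) in turn, k times. With k = 24 and the leading q restored, the coefficients are the first Ramanujan tau numbers.

    # Coefficients of Product_{n>=1} (1 - x^n)^k, truncated after x^terms.
    def product_coeffs(k, terms):
        coeffs = [0] * (terms + 1)
        coeffs[0] = 1
        for n in range(1, terms + 1):
            for _ in range(k):
                # multiply in place by (1 - x^n): c[i] -= c[i-n]
                for i in range(terms, n - 1, -1):
                    coeffs[i] -= coeffs[i - n]
        return coeffs

    # tau(m) is the coefficient of q^m in q * Product (1-q^n)^24
    c = product_coeffs(24, 10)
    print([c[m - 1] for m in range(1, 11)])
    # [1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]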
The zeros of Zeta(s) are distributed in a T shape in the complex plane. Some are on the negative real axis, at s = -2n, and the others are in the strip 0 < Re(s) < 1. The terms in the definition of Zeta(2) occur in the equations for quantum levels of spectral lines. More recently physicists have been checking resonant frequencies in dynamical systems and getting numbers that look like the distribution of the known zeros of the Zeta function. All of this work is motivated by a million dollar prize for a more precise characterisation of the zeros of the Zeta function. The Riemann hypothesis states that all the zeros not of the form s = -2n are of the form s = 1/2+it where t is real. As yet no one knows if this is true.

During the search for a proof of Fermat's conjecture quite an elaborate theory of Adeles was constructed. All adeles corresponded to number fields and specialised zeta functions. These algebraic structures all had a topology and the interesting ones contained roots of unity: solutions to X^n - 1 = 0. Chemists and physicists were used to working with poorly understood topologies to describe intractable results in real life. String theories are examples of weird topologies used to describe the real world. The theory of dynamical systems, which gave fractals and explanations for chaos, has been some advance on Galileo's theories on projectiles because it is adaptable to quantum theory. Quantum theory makes the maths of physics quite complicated if things like angles and space time co-ordinates are all integer sequences. Many states are impossible by easy calculations while other states may be subject to intractable calculations. Schrodinger's cat is either alive or dead, so uncertainty is replaced by a wave function.

Some very profound problems in mathematics have very simple definitions. A classic example is the '3n+1' problem or the Collatz Conjecture. Here f(n) = 3n+1 when n is odd and f(n) = n/2 when n is even. For example 5 is odd so f(5)=3*5+1=16. f(16)=8, f(8)=4, f(4)=2 and f(2)=1. We also have f(1) = 4. For any function f and starting point x the sequence x, f(x), f(f(x)), ... is known as the orbit of x under f. In the case of the '3n+1' function these orbits are also known as 'Ulam Numbers'. The question concerns the eventual behaviour of such sequences: do these 'Ulam Numbers' always converge to the cycle 1 2 4? At the time of writing no one knows.

A dynamical system is a pair (X,f) where X is a set and f is a continuous function f:X->X. For example X may be co-ordinates and momenta of sun and planets and f a rule for computing the state of the system at time t+1 given its state at time t. For any given start x[0] the sequence given by x[n+1] = f(x[n]) gives the ultimate behaviour of the system. The French mathematician Poincare invented dynamical systems. Many important problems of maths, physics and even economics and ecology can be reduced to the study of dynamical systems. Since 1986 dynamical systems have served to illustrate the limitations of mathematics as a means of predicting the real world. The simple system z<-z^2+c defined on the complex number plane is enough to generate the Mandelbrot set. The fractal images that people see are the complement of the Mandelbrot set. The colours reflect the amount of computation required to show that the iteration z<-z^2+c diverges. The fragment of C-language code generates the pixels at the border of the Mandelbrot set.

During 1984 I was working in Bangkok, Thailand. I had just completed a Thai language spreadsheet application, called 'Thai Calc', and was now working on an industrial application.
Bangkok had its attractions, despite the horrendous traffic jams. The hotels which catered for Western visitors often had well-stocked bookstores near the lobbies, and I would often sit in the coffee shop of such a hotel and read an uncensored copy of the Scientific American. This was a significant improvement on Saudi Arabia, where the Scientific American might be available from bookstores, but often with pages cut out because of censorship.

One issue contained an article by Stephen Wolfram explaining 'one dimensional cellular automata'. The simplicity of the algorithm was staggering. It essentially goes like this: for each line X[n], compute X[n+1] via the formula X[n+1,i] = KEY[S[i]] (repeated under A NEW KIND OF SCIENCE below). Here the integer array KEY represents a mapping from the set of numbers 0,1,2,...,N to itself. If KEY = 0 1 0 1 0 1 .... then the linear automaton represents multiplication of a polynomial by successive powers of 1+X+X^2 (mod 2). Other keys give remarkably complex pictures, along with many more images which just look like microscopic views of cement or concrete.

Firstly I tried to program a computer to do an ASCII graphic of the evolving cells. This was easy enough with block graphic characters such as shaded rectangles and punctuation marks, but as soon as I tried to display colours the program gave up. At that time I was quite keen to use ANSI escape sequences as specified in thousands of existing termcap entries. I had access to NEC machines at the office and they all had nice colour graphics screens. Japanese manufacturers included good high-resolution graphics chips at an early stage because they wanted to render their own written language in a beautiful form. By contrast the newly arrived IBM PC came with a horrible colour graphics adaptor which would give the user a headache after about ten minutes.

I had written a simple BASIC program to generate lines of the cellular automaton, and then to add escape sequences to colour individual characters on a line of output to the screen. The escape sequences worked well on short sequences of text, or any text with only a small number of colour changes. It took a considerable amount of time to find out that the program was generating the correct escape sequences, but the terminal firmware was mashing up the results because it truncated sequences if they were too long. This is typical of the computer 'bug' which fails workable programs because the program is 'too long and complicated' for the computer software to handle. This caused me to completely lose confidence in termcap-style systems, and to handle screen I/O via 'kitchen table' code written for each terminal as required.

Twenty-five years later this 'kitchen table' stuff seems to work even better than in 1984. Back in 1984 a terminal with firmware, a screen and a keyboard cost over $1000. There were many different terminal specifications, and making the same program work on all of them required a thorough understanding of termcap. Nowadays these terminals still exist via terminal emulation and you can have six or seven running on a computer desktop. Surprisingly, development is still going on for these old-fashioned terminals, and they keep on improving even on Microsoft platforms. The big impetus seems to be China, Japan and Korea, or 'CJK'. The Unicode-enabled 'Terminal' program on which this document is being written still supports Digital Equipment VT-100 style escape sequences, and I can swap colours to please my ageing eyes via the same control code sequences that I learned about in 1984. The 'Linux Console' also uses escape sequences.
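For the curious, the colour-changing sequences described above are of the general form ESC [ 3n m. The following C sketch simply cycles through the standard foreground colours and then resets; the loop and the text printed are mine, not the original BASIC program.

/* Print a line of text in each of the standard ANSI/VT-100 foreground
   colours (codes 31..37), then reset the attributes with ESC [ 0 m. */
#include <stdio.h>

int main(void)
{
    for (int colour = 1; colour <= 7; colour++)
        printf("\033[3%dmcolour %d\033[0m\n", colour, colour);
    return 0;
}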
Along with UNIX and Linux came the .xpm image format used by X-windows. XPM is essentially ASCII graphics, but it is supported by easily obtained programs which will convert .xpm style images to .gif or .jpg formats.

Mathematics serves computers rather better than computers serve mathematics. The .jpg algorithm relies on quite sophisticated mathematics, and yet few owners of digital cameras can describe a Discrete Fourier Transform. Those people who get the 'Number Unavailable' logo on their mobile phone are unlikely to realise that a Gram-Schmidt orthogonalisation process has failed because a matrix has become singular, having overspecified the set of equations (one for each channel in the cell). Skills acquired through the study of mathematics tend to be more 'future proof' than other skills. That does not mean that these skills always lead people to the right conclusions. In the early 1800s many mathematicians thought they could glimpse a method of proving Fermat's Last Theorem via unique factorisation of numbers, until someone pointed out that unique factorisation could not be generalised.

This page is being rebuilt. The Windows stuff has not been compiled since 2004, but I had it running in a Beijing cybercafe in 2008. It seems to work OK in a Chinese-enabled command window. The programs are in dna.exe.

The use of the caret '^' for power, such as x^2 for x squared, may seem to be somewhat 'tacky' or 'naff', but the search engines will find expressions entered this way. Try searching for x^2+877 next time you use a search engine.

ULAM NUMBERS
f(n) = n/2 if n is even.
f(n) = 3n+1 if n is odd.
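A minimal C sketch of the orbit computation described earlier, printing terms of the orbit until the value 1 is reached; the starting value 27 is just an illustrative choice.

/* Print the orbit of a starting value under the '3n+1' function. */
#include <stdio.h>

int main(void)
{
    unsigned long n = 27;                      /* illustrative starting point */
    while (n != 1) {
        printf("%lu ", n);
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;  /* halve if even, else 3n+1 */
    }
    printf("1\n");
    return 0;
}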
DYNAMICAL SYSTEMS
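The original C fragment for the Mandelbrot border is not reproduced on this page. As a stand-in, here is a minimal escape-time sketch of the iteration z <- z^2 + c discussed above; the window, grid spacing and iteration limit are chosen purely for illustration.

/* Escape-time iteration z <- z^2 + c over a grid of points c.
   Points still bounded after MAXIT steps print as '*'; escaping points
   print their iteration count (mod 10), which is what the colours in
   the usual fractal images encode. */
#include <stdio.h>

#define MAXIT 100

int main(void)
{
    for (int row = 0; row < 48; row++) {
        double im = 1.2 - row * 0.05;              /* imaginary part of c */
        for (int col = 0; col < 88; col++) {
            double re = -2.0 + col * 0.03;         /* real part of c */
            double x = 0.0, y = 0.0;               /* z starts at 0 */
            int k;
            for (k = 0; k < MAXIT && x * x + y * y <= 4.0; k++) {
                double xt = x * x - y * y + re;    /* real part of z^2 + c */
                y = 2.0 * x * y + im;              /* imaginary part of z^2 + c */
                x = xt;
            }
            putchar(k == MAXIT ? '*' : '0' + k % 10);
        }
        putchar('\n');
    }
    return 0;
}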
A NEW KIND OF SCIENCE
X[n+1,i] = KEY[S[i]]
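A minimal C sketch of this automaton, assuming (as the KEY = 0 1 0 1 ... example earlier suggests) that S[i] is the sum of cell i and its two neighbours; the width, number of generations and the '#' character for live cells are illustrative choices.

/* One-dimensional totalistic cellular automaton:
   X[n+1,i] = KEY[ S[i] ],  S[i] = X[n,i-1] + X[n,i] + X[n,i+1],
   with cyclic boundary conditions and a single seed cell. */
#include <stdio.h>
#include <string.h>

#define WIDTH 79
#define GENS  40

int main(void)
{
    int x[WIDTH], next[WIDTH];
    int key[4] = {0, 1, 0, 1};         /* KEY = 0 1 0 1, i.e. sum mod 2 */

    memset(x, 0, sizeof x);
    x[WIDTH / 2] = 1;                  /* single seed cell in the middle */

    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < WIDTH; i++)
            putchar(x[i] ? '#' : ' ');
        putchar('\n');
        for (int i = 0; i < WIDTH; i++) {
            int s = x[(i + WIDTH - 1) % WIDTH] + x[i] + x[(i + 1) % WIDTH];
            next[i] = key[s];
        }
        memcpy(x, next, sizeof x);
    }
    return 0;
}

With KEY = 0 1 0 1 this reproduces the repeated multiplication by 1+X+X^2 (mod 2) mentioned earlier, giving a Sierpinski-like triangular pattern; other KEY tables give the more complex, concrete-like pictures described above.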
REVISION NOTES
CREDITS
J. Swinnerton-Dyer. A Galois Theory course.
Dr. Garling. Measure Theory.
J. H. Conway. For inventing 'Life'.
James Wallbank. Website hosting.
LINKS
REFERENCES
[1] Hardy & Wright, pp. 19, 22.
[2] n=p1+p2 (=$1m). David Ward, The Guardian, 18 March 2000.
[3] Ada and the First Computer. Eugene Eric Kim & Betty Alexandra Toole, Scientific American, May 1999.
[4] Plagues: Their Origin, History and Future. Christopher Wills, Harper Collins, 1996.
[5] Prime Time. Erica Klarreich, New Scientist, 11 Nov 2000, #2264.
[6] An Essay on the Principle of Population. Malthus, various editions from 1798-1830.
(C) Tony Goddard, Sheffield 2011
+44(0)7944 764312