10 points to first correct answer?
also, please provide a source, and please answer!!!
Favorite Answer
The Year-2038 Bug
A web site devoted to disseminating information about the year-2038 bug
----------------------------------------
The year-2038 bug is similar to the Y2K bug in that it involves a time-wrap problem not handled by programmers. In the case of Y2K, many older machines did not store the century digits of dates, hence the year 2000 and the year 1900 would appear the same.
Of course, we now know that the prevalence of computers that would fail because of this error was greatly exaggerated by the media. Computer scientists were generally aware that most machines would continue operating as usual through the century turnover, with the worst result being an incorrect date. This prediction held true into the new millennium. Affected systems were tested and corrected in time, although the correction and verification of those systems was monumentally expensive.
There are, however, several other problems with date handling on machines in the world today. Some are less prevalent than others, but almost all computers suffer from one critical limitation. Most programs work out their dates from a perpetual second counter: 86,400 seconds per day, counting from Jan 1 1970. A recent milestone was Sep 9 2001, when this value rolled over from 999,999,999 seconds to 1,000,000,000 seconds. Very few programs anywhere store time as a 9-digit number, so this was not a problem.
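To make the counter concrete, here is a minimal C sketch (my own illustration, not taken from the site above) that prints the raw second count alongside its calendar form, including the one-billion-second milestone:

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);         /* seconds elapsed since Jan 1 1970 UTC */
    time_t billion = 1000000000;     /* the Sep 9 2001 rollover moment */
    printf("%ld -> %s", (long)now, ctime(&now));
    printf("%ld -> %s", (long)billion, ctime(&billion));
    return 0;
}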
Modern computers use a standard signed 4-byte integer for this second count. That leaves 31 bits for the value, giving a maximum of 2^31 - 1 = 2,147,483,647; the remaining bit is the sign. This means that when the second count reaches 2,147,483,647, it will wrap to -2,147,483,648.
The precise date of this occurrence is Tue Jan 19 03:14:07 2038. At this time, a machine prone to this bug will show the time Fri Dec 13 20:45:52 1901, hence it is possible that the media will call this The Friday 13th Bug.
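The wrap itself can be demonstrated with a short C sketch (my own, squeezing the count into a 32-bit integer to stand in for an affected machine; whether negative times convert at all varies by C library, so the code guards for that):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    int32_t last = INT32_MAX;                        /* 2147483647: the final second */
    int32_t wrapped = (int32_t)((uint32_t)last + 1); /* models the wrap to -2147483648 */

    time_t t = last;
    struct tm *tm = gmtime(&t);
    printf("%d -> %s", last, tm ? asctime(tm) : "(unrepresentable)\n");

    t = wrapped;
    tm = gmtime(&t);                                 /* some libcs reject negative times */
    printf("%d -> %s", wrapped, tm ? asctime(tm) : "(unrepresentable)\n");
    return 0;
}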
See the FAQ
Update (01/2004): The first 2038 problems are already here. Many 32-bit programs calculate time averages using (t1 + t2)/2. This calculation fails once the time values pass 30 bits, because the intermediate sum t1 + t2 then exceeds the 31 bits available to a signed 32-bit integer. The exact day can be found with a small Unix C program, as follows:
echo '#include <stdio.h>
#include <time.h>
time_t q = 1UL << 30;
int main(void) { return puts(asctime(localtime(&q))); }' > x.c && cc x.c && ./a.out
In other words, from the 10th of January 2004 onward, the occasional system will perform incorrect time calculations until its code is corrected. Thanks to Ray Boucher for this observation.
The temporary solution is to replace all (t1 + t2)/2 with (((long long) t1 + t2) / 2) (POSIX/SuS) or (((double) t1 + t2) / 2) (ANSI).
Alternatively avoid casts and use: (t1/2 + t2/2 + (t1&t2&1)).
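Here is a side-by-side sketch of the failure and both repairs (my own demonstration, using int32_t to stand in for a 32-bit time_t, with sample values I chose to force the overflow):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t t1 = 1100000000, t2 = 1200000000;  /* both past 2^30, so t1 + t2 exceeds 31 bits */

    /* Broken: the 32-bit sum wraps negative (modeled via unsigned to keep the demo well defined). */
    int32_t sum = (int32_t)((uint32_t)t1 + (uint32_t)t2);  /* -1994967296 */
    int32_t broken = sum / 2;                              /* -997483648: nonsense */

    /* Fix 1: widen before adding (long long per POSIX/SuS; double for ANSI C). */
    int32_t widened = (int32_t)(((long long)t1 + t2) / 2);

    /* Fix 2: cast-free; halve first, then restore the bit lost when both values are odd. */
    int32_t castfree = t1 / 2 + t2 / 2 + (t1 & t2 & 1);

    printf("broken:    %d\n", broken);
    printf("widened:   %d\n", widened);   /* 1150000000 */
    printf("cast-free: %d\n", castfree);  /* 1150000000 */
    return 0;
}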
In 1976, DEC (Digital Equipment Corporation) decided to extend the PDP-11 architecture to 32 bits, creating the first 32-bit minicomputer, the VAX, referred to as a super-mini.
1985 saw the development of the 80386. Although 32-bit designs were known to be possible well before this, the 80386, which led eventually to the Pentium III, was Intel's first processor to do the job; the VAX, however, developed the architecture first.
Sources below. Hope this helps you out