A few days ago
Anonymous

10 points to first correct answer?

please help, when was the 32-bit made? what year…

also, please provide a source, and please answer!!!

Top 4 Answers
A few days ago
Anonymous

Favorite Answer

I don’t know why I am helping you with your homework; you must have caught me on a good day. Click on the link below and don’t tell your teacher.

A few days ago
Katie V
Is this what you’re looking for? . . .

The Year-2038 Bug

A web site devoted to disseminating information about the year-2038 bug


The year-2038 bug is similar to the Y2K bug in that it involves a time-wrap problem not handled by programmers. In the case of Y2K, many older machines did not store the century digits of dates, hence the year 2000 and the year 1900 would appear the same.
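
As a small illustration (my own sketch, not part of the pasted site), C’s struct tm even stores years as an offset from 1900, so naive two-digit formatting reproduces exactly this ambiguity:

#include <stdio.h>

int main(void) {
    int tm_year = 100;                  /* struct tm convention: years since 1900, so 100 means 2000 */
    printf("19%02d\n", tm_year % 100);  /* two-digit formatting prints 1900 - the Y2K mistake */
    printf("%d\n", 1900 + tm_year);     /* correct handling prints 2000 */
    return 0;
}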

Of course we now know that the prevalence of computers that would fail because of this error was greatly exaggerated by the media. Computer scientists were generally aware that most machines would continue operating as usual through the century turnover, with the worst result being an incorrect date. This prediction held true into the new millennium. Affected systems were tested and corrected in time, although the correction and verification of those systems was monumentally expensive.

There are, however, several other problems with date handling on machines in the world today. Some are less prevalent than others, but it is true that almost all computers suffer from one critical limitation. Most programs work out their dates from a perpetual second counter: 86400 seconds per day, counting from Jan 1 1970. A recent milestone was Sep 9 2001, when this counter rolled over from 999,999,999 seconds to 1,000,000,000 seconds. Very few programs anywhere store time as a 9-digit number, and therefore this was not a problem.
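
As a quick check (a sketch of mine, not from the original page), a few lines of C confirm that milestone by asking the library which date corresponds to a counter value of 1,000,000,000 seconds:

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t billion = 1000000000;              /* the counter value reached on 9 Sep 2001 */
    printf("%s", asctime(gmtime(&billion)));  /* prints Sun Sep  9 01:46:40 2001 (UTC) */
    return 0;
}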

Modern computers use a standard 4-byte signed integer for this second count. That leaves 31 bits for the value, giving a maximum of 2^31 - 1; the remaining bit is the sign. This means that when the second count reaches 2147483647, it will wrap to -2147483648.

The precise date of this occurrence is Tue Jan 19 03:14:07 2038. At this time, a machine prone to this bug will show the time Fri Dec 13 20:45:52 1901, hence it is possible that the media will call this The Friday 13th Bug.
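
For illustration (again my own sketch, not part of the site’s text), the wrap can be reproduced by treating the counter as a signed 32-bit value and printing the dates on either side of the limit; on a machine whose C library accepts a 64-bit (and negative) time_t, this prints the 2038 and 1901 dates quoted above:

#include <stdio.h>
#include <time.h>
#include <stdint.h>

int main(void) {
    time_t last = (time_t)INT32_MAX;     /* 2147483647: Tue Jan 19 03:14:07 2038 UTC */
    time_t wrapped = (time_t)INT32_MIN;  /* -2147483648: Fri Dec 13 20:45:52 1901 UTC */
    printf("last:    %s", asctime(gmtime(&last)));
    printf("wrapped: %s", asctime(gmtime(&wrapped)));
    return 0;
}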

See the FAQ

Update: 01/2004 The first 2038 problems are already here. Many 32-bit programs calculate time averages using (t1 + t2)/2. This calculation fails once the time values pass 30 bits, because the intermediate sum t1 + t2 then exceeds 31 bits and overflows a signed 32-bit integer. The exact day can be calculated by making a small Unix C program, as follows:

echo '#include <stdio.h>
#include <time.h>
time_t q = (1UL << 30); int main() { return puts(asctime(localtime(&q))); }' > x.c && cc x.c && ./a.out

In other words, on the 10th of January 2004 the occasional system will perform an incorrect time calculation until its code is corrected. Thanks to Ray Boucher for this observation.

The temporary solution is to replace all (t1 + t2)/2 with (((long long) t1 + t2) / 2) (POSIX/SuS) or (((double) t1 + t2) / 2) (ANSI).

Alternatively avoid casts and use: (t1/2 + t2/2 + (t1&t2&1)).
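
To see the difference concretely (a sketch under the assumption of 32-bit time values just past 2^30, i.e. dates after January 2004), the naive average wraps negative while both fixes give the right answer:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t t1 = 1100000000, t2 = 1100000100;  /* hypothetical sample times past 2^30 */

    /* naive average: t1 + t2 exceeds 2^31 - 1, so a 32-bit sum wraps negative
       (computed via unsigned arithmetic here to show the wrap deterministically) */
    int32_t naive = (int32_t)((uint32_t)t1 + (uint32_t)t2) / 2;

    /* widened average: promote to 64 bits before adding, as in the POSIX-style fix */
    int32_t wide = (int32_t)(((int64_t)t1 + t2) / 2);

    /* cast-free average: halve first, then restore the bit lost when both values are odd */
    int32_t nocast = t1 / 2 + t2 / 2 + (t1 & t2 & 1);

    printf("naive=%d wide=%d cast-free=%d\n", naive, wide, nocast);
    return 0;
}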


A few days ago
Trey S
Mid ’70s. It was developed by DEC (the VAX line). But 1976 is your answer. I need the 10 points. 🙂

In 1976, DEC (Digital Equipment Corporation) decided to extend the PDP-11 architecture to 32 bits, creating the first 32-bit minicomputer, referred to as a super-mini.

1985 – Development of the 80386. Although 32-bit designs existed before this, the 80386, which led to the Pentium line, was Intel’s first 32-bit processor. DEC’s VAX, however, pioneered the 32-bit architecture among minicomputers.

Sources below. Hope this helps you out


A few days ago
florianson
the same year that the 16-bit was made.