What does 8-bit / 16-bit actually refer to?












93















When talking about retro games, terms like "8-bit music" or "16-bit graphics" often come up. I myself often use these terms, but I'm not exactly sure what they refer to. What do they mean?

































  • I realize that the examples I have given are two different contexts; I'd like to know both. :)

    – Kevin Yap
    Sep 25 '10 at 22:37






  • 68





    It's funny that your 8 bit question is Arqade post number 8008, honouring the legendary 8 bit Intel 8008 processor

    – Zommuter
    Sep 17 '11 at 10:43








  • 7





    @ProSay that would be awesome, but the question is almost 1 year old

    – Zommuter
    Sep 17 '11 at 10:55






  • 1





    Oh damn, I thought it was 5 minutes old; some guy went and bumped it

    – ProSay
    Sep 17 '11 at 12:22








  • 1





    @Pro I'm that some guy... :p

    – Zommuter
    Sep 21 '11 at 19:44


















terminology
















asked Sep 25 '10 at 22:32
– Kevin Yap



























































7 Answers


















68














8-bit and 16-bit, for video games, specifically refer to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called the Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called the TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety of the graphics and the music by determining how much data can be used at once; Oak's answer details the specifics of graphics.



If you don't know what a computer bit is, here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit, from which I'll quote the first sentence, which is all one really needs to know.




A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.




Now, note that in modern times, things like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer to the style of music or graphics used in games during those generations, done as an homage to nostalgia. 8-bit music is the standard chiptune fare, and the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex yet remained largely 2-dimensional and at 240p resolution.
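To make the word-size idea concrete, here is a minimal Python sketch (my illustration, not from the original answer) of how arithmetic behaves when a value is confined to 8 bits:

```python
# A value held in an 8-bit register has 2**8 = 256 possible states,
# so arithmetic wraps around once it passes 255.
WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1  # 0b11111111 == 255

def add_8bit(a: int, b: int) -> int:
    """Add two numbers the way an 8-bit register would."""
    return (a + b) & MASK

print(add_8bit(250, 10))  # 4 -- wrapped past 255
print(2 ** 8, 2 ** 16)    # 256 65536 -- distinct states in 8 vs. 16 bits
```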






answered Sep 25 '10 at 22:43
– Grace Note


























  • An example of such "intentional retro": megaman.capcom.com/10

    – Raven Dreamer
    Sep 25 '10 at 22:49








  • 3





    To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

    – Wikwocket
    Sep 26 '10 at 2:00






  • 7





    Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

    – BlueRaja - Danny Pflughoeft
    Sep 26 '10 at 4:18






  • 2





    Yet another amazing answer by Grace Note. :)

    – Kevin Yap
    Sep 26 '10 at 5:43






  • 9





    @kirk For one reason or another, the "8-bit generation" doesn't start with the introduction of 8-bit processors. It probably was a retroactive name: the 16-bit generation was defined by its bit size, but the previous generation, heralded by the Famicom, was typically considered separate from the Atari 5200 generation. So when the 16-bit generation was named, they simply called the previous one "8-bit", at the cost of accuracy.

    – Grace Note
    Nov 8 '10 at 1:07



















26














8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also refers to the size of the memory address space. The word size of any chip is the most defining aspect of its design. There are several reasons why it is so important (the short sketch after this list puts rough numbers on the first two):




  • First off, the maximum value you can hold. An 8-bit integer can hold a value up to 255. A 16-bit int can be up to 65,535.

  • Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).

  • Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.

  • Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.


    • Aside from the memory constraints, an 8-bit machine usually relies on a separate processor to handle floating-point math in hardware. 16-bit machines usually have an integrated floating-point unit to handle that.

    • With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.



  • Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.
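Here is the sketch mentioned above: rough numbers behind the first two bullet points, in Python, assuming nothing more than flat, byte-addressed memory:

```python
# For a w-bit word: the largest unsigned value it can hold, and how
# many bytes a w-bit address register can reach (flat byte-addressed
# memory assumed; real machines often used segmenting tricks, as the
# answer notes).
for w in (8, 16, 32, 64):
    print(f"{w:2}-bit: max unsigned value {2**w - 1:,}, "
          f"addressable bytes {2**w:,}")
```

For 32 and 64 bits this prints the 4,294,967,296- and 18,446,744,073,709,551,616-byte figures quoted above.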


TL;DR



The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).



I hope this ramble of an answer is of some help.



Further Reading




  • https://en.wikipedia.org/wiki/Word_(computer_architecture)






answered Sep 20 '11 at 15:11
– CyberSkull





















  • 4





    Regarding the first bullet point: An 8-bit integer can hold values up to 255 if it is unsigned. Same for 16-bit, except not 65,536, but 65,535. It's 65,536 different values (including zero, so 65,535 is the max).

    – Victor Zamanian
    Oct 25 '12 at 22:31











  • @VictorZamanian I always get the 16-bit int max off by 1. I'm also not going to get into signed versus unsigned in this statement.

    – CyberSkull
    Nov 1 '12 at 16:28








  • 1





    This, incidentally, is why you can only get as many as 255 rupees in Legend of Zelda, and why 255/254 is often the (seemingly random) hard cap on values in 8-bit games.

    – Zibbobz
    Mar 13 '14 at 16:58






  • 1





    @Zibbobz Several games max out at 65535 money.

    – Mooing Duck
    Oct 20 '14 at 16:47



















18














The term "8-bit graphics" literally means that every pixel uses 8 bits for storing the color value - so only 256 options. Modern systems use 8 bits to store each color channel, so every pixel typically uses 24 bits.



There's nothing preventing modern games from limiting themselves to a stricter, 8-bit color palette; but the term is often used to describe old games in which using 8 bits per pixel was necessary.
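As a hedged illustration of "8 bits per pixel = 256 options", here is one common direct packing, the 3-3-2 split (3 bits red, 3 green, 2 blue). This is only for illustration; real 8-bit-era hardware generally used palette lookup tables rather than this direct encoding:

```python
# Squeeze a 24-bit RGB colour (8 bits per channel) into one byte by
# keeping the top 3 bits of red and green and the top 2 bits of blue.
def rgb24_to_rgb332(r: int, g: int, b: int) -> int:
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

print(rgb24_to_rgb332(255, 128, 64))                             # 241, one byte
print(rgb24_to_rgb332(0, 0, 0), rgb24_to_rgb332(255, 255, 255))  # 0 255
```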

























  • 4





    Specifically, 8-bit color yields a 256-color palette, 16-bit color would be 64k colors, and the modern 24-bit palette supports 16 million colors.

    – Wikwocket
    Sep 26 '10 at 1:55






  • 3





    @Wikwocket: Sometimes, you'll hear reference to 32-bit graphics, which are just 24-bit graphics with an 8-bit transparency setting.

    – user2974
    Sep 26 '10 at 16:11






  • 7





    But the kind of graphics generally called "8-bit" are those associated with games of the NES era, where each tile used only four (or three+transparent for sprites) colors selected from a larger palette - the graphics themselves weren't "8-bit" in any sense.

    – Random832
    May 30 '11 at 17:52






  • 2





    @Random832 well those games may have been called 8-bit, but my answer explains what 8-bit graphics literally means. Maybe those games were called 8-bit on account of the processor, as Grace Note mentioned in the other answer.

    – Oak
    May 30 '11 at 18:49






  • 2





    The SNES could have up to 256 colors on screen out of a palette of 65k colors, the NES had something like 16-24 colors max on screen.

    – CyberSkull
    Sep 20 '11 at 14:54



















6














Way back in the day, the bit size of a CPU referred to how wide the processor's registers were. A CPU typically has several registers in which you can move data around and do operations on it: for example, add 2 numbers together, then store the result in another register. In the 8-bit era the registers were 8 bits wide, and if you had a big number like 4,000 it wouldn't fit in a single register, so you would have to do two operations to simulate a 16-bit operation. For example, if you had 10,000 gold coins, you would need two add instructions to add them together: one to handle the lower 8 bits and another to add the upper 8 bits (with carrying taken into account). Whereas a 16-bit system could have done it in one operation. You may remember that in The Legend of Zelda you would max out at 255 rupees, as it's the largest unsigned 8-bit number possible.
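A small Python sketch of that two-instruction idea (illustrative only, not modeled on any specific instruction set):

```python
# Add two 16-bit numbers using only 8-bit-wide additions, the way an
# 8-bit CPU has to: low bytes first, then high bytes plus the carry.
def add16_on_8bit(a: int, b: int) -> int:
    lo = (a & 0xFF) + (b & 0xFF)                # first ADD: low bytes
    carry = lo >> 8                             # did the low add overflow?
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF   # second ADD, with carry
    return (hi << 8) | (lo & 0xFF)

print(add16_on_8bit(10_000, 10_000))  # 20000, done in two 8-bit additions
```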



Nowadays, registers in a CPU come in all different sizes, so this isn't really a good measure anymore. For example, the SIMD registers (AVX) in today's amd64 processors are 256 bits wide (for real), yet the processors are still considered 64-bit. These days most people think of the addressing size the CPU is capable of supporting. It seems the bit size of a machine is really based on the hardware trends of the time. But for me, I still consider the size of a native integer register, which seems correct even today and still matches the addressing size of the CPU as well. That makes sense, since the native integer size of a register is typically the same size as a memory pointer.







































6














In addition to Oak's answer, the 8 bits for graphics not only limit¹ the color palette, but also the screen resolution, to a maximum of 256 in each direction (e.g. the NES has 256x240 pixels, of which 256x224 are typically visible). For sprite graphics you need to split these 8 bits: e.g. to obtain 32 = 2⁵ different x-positions and 16 = 2⁴ different y-positions, you have 8x16 (2³x2⁴) pixels left for a sprite's resolution. That is why you get that typical pixel look.



The same applies to music: 8 bits means a maximum of 256 levels for your sound output (per sample; the temporal resolution is another issue), which is too coarse to produce sounds that don't come across as chiptune (or noisy, if still attempting PCM sound) to the human ear. 16 bits per sample is what the CD standard uses, by the way. But "16 bit music" more often refers to Tracker music, whose limits are similar to those of popular game consoles with a 16-bit processor.
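To make the "256 output levels" point concrete, here is a short, illustrative Python sketch of quantising one cycle of a sine wave at 8 and 16 bits per sample:

```python
import math

def quantise(sample: float, bits: int) -> int:
    """Map a sample in [-1.0, 1.0] to a signed integer output level."""
    levels = 2 ** (bits - 1) - 1   # 127 for 8-bit, 32767 for 16-bit
    return round(sample * levels)

wave = [math.sin(2 * math.pi * i / 16) for i in range(16)]
print([quantise(s, 8) for s in wave])    # coarse 8-bit levels
print([quantise(s, 16) for s in wave])   # far finer 16-bit (CD) levels
```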



Another interesting point is that an 8-bit input device is limited¹ to 8 boolean button states, split up into the four directions of the D-pad plus four buttons. Or a 2-button joystick, with 3 bits (a mere 8 levels, including the sign!) remaining for each of the x- and y-axes.



So, for genuinely old games, 8 bit / 16 bit might be considered to refer to the system's capabilities (but consider Grace's point about the inconsistency in the label "8 bit"). For a retro game, consider whether it would be theoretically possible to obey the mentioned constraints (neglecting shader effects like bloom), although you might have to allow some "cheating": I'd consider a sprite-based game using 8x16-pixel square sprites still 8 bit even if the sprites could float at any position in HD resolution and the squares were 16x16 pixels each...





¹ Well, obviously you can use 2 times 8 bits to circumvent that limit, but as BlueRaja points out in a comment on Grace's answer, with the accumulator register being only 8 bits as well, that would cause a performance loss. Also, it would be cheating your way to 16 bit, IMHO.







































0














Despite all the interesting technical discussion provided by other contributors, the 8-bit and 16-bit descriptors for gaming consoles don't mean anything consistently. Effectively, "16-bit" is only meaningful as a marketing term.

Briefly, in word size:

  • The Super Nintendo uses the Ricoh 5A22 CPU, which has 16-bit index registers and opcodes that can process 16-bit numbers into a 16-bit accumulator, but it doesn't have the 16-bit general registers we might associate with a typical 16-bit processor. I suppose this is a 16-bit word size in 650x terms, but it's strange terminology to me. I might rather say the 5A22 instruction set supports 16-bit value operations. The 65C816 documentation does not anywhere define words as any particular size.

  • The TurboGrafx-16 doesn't have native 16-bit operations, nor a 16-bit accumulator to store them in. Like the Super Nintendo, this is a 650x-family CPU, but this one only supports 8-bit operations and has only 8-bit registers. If it has a word size, it is 8-bit.

  • The Genesis/Mega Drive with the Motorola 68000 offers a 32-bit word size (with 32-bit registers and 32-bit operations) but was marketed with "16-bit" in the molded plastic. As a relatively new 32-bit CPU, and due to historical patterns, the 68k family names a 16-bit value a "word", but has full native support for nearly all operations on 32-bit values, named "long". This represents the beginning of the era when "word size" had become a legacy concept. Previously, there were architectures with things like 9-bit words or 11-bit words. From here on, a word most commonly becomes "two 8-bit bytes".

In addressing space:

Most 8-bit consoles had a 16-bit physical addressing space (256 bytes wouldn't get you very far). They used segmenting schemes, but so did the TurboGrafx-16. The Genesis had a CPU capable of 32-bit addressing.

In data bus:

The TurboGrafx-16 and the Super Nintendo had an 8-bit data bus. The Genesis/Mega Drive had a 16-bit data bus.

In color depth:

The total possible color palette is owned by the graphics circuitry, and the palette table is expressed however that circuitry requires. You wouldn't expect this to have much correlation across systems, and it doesn't.

  • The Super Nintendo had a 15-bit palette space, and 8 bits of space to select colors out of that space.

  • The Genesis had a 9-bit palette space, with essentially 6 bits of space to select colors out of that space.

  • The TurboGrafx-16 also had a 9-bit palette space, with a complicated scheme of many simultaneous palettes, all of which were 4-bit.

This doesn't fully describe the graphics capabilities of the systems even in terms of colors, which involved other features like special layer features, specifics of their sprite implementations, and other details. However, it does accurately portray the bit depth of the major features.
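As a quick sanity check on those palette figures, using nothing beyond the bit counts above:

```python
# Colours expressible in each palette space: 2 ** bit_depth.
print(2 ** 15)  # 32768 colours in the SNES's 15-bit palette space
print(2 ** 9)   # 512 colours in the 9-bit Genesis / TurboGrafx-16 spaces
print(2 ** 8)   # 256 simultaneous choices from an 8-bit selector
```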



So you can see there are many features of systems which can be measured in bit size which have no requirement to agree, and there is no particular grouping around any feature that is 16-bit for consoles grouped this way. Moreover, there is no reason to expect that consumers would care at all about word size or data paths. You can see that systems with "small" values here were regardless very capable gaming platforms for the time.

Essentially, "16-bit" is just a generation of consoles which received certain marketing in a certain time period. You can find a lot more commonality between them in terms of overall graphics capability than you can in terms of any specific bitness, and that makes sense, because graphics innovation (at a low cost) was the main goal of these designs.

"8-bit" was a retroactive identification for the previous consoles. In the US this was the dominant Nintendo Entertainment System and the less-present Sega Master System. Does it apply to an Atari 7800? A 5200? An Intellivision? An Atari 2600, or a ColecoVision, or an Odyssey 2? Again, there is no bitness boundary that is clear among these consoles. By convention, it probably only includes the consoles introduced from around 1984 to 1988 or so, but this is essentially a term we apply now that was not used then, and it refers to no particular set of consoles, except by convention.







































-8














When talking about retro gaming's "8-bit", "16-bit", and "64-bit", it simply means the amount of pixels used to create the images. For example, the NES and Sega Mega Drive are very blocky and have large pixels (8-bit); the SNES and Sega Genesis improve this to "16-bit", the N64 masters this concept at 64-bit, and so on to 128, to 256, and eventually to 1080 HD. Even though it is and was slightly out of context.

Nintendo Power in the early 90s actually created these "terms" when they released articles about how Nintendo's 8-bit power was so much better than Sega's. To each their own, but they did this because 99% of people would have no clue what they were actually talking about.



























  • 3

    Pixels are nowhere near what 8/16/32/64 or higher bit mean at all.

    – Frank
    Feb 16 '15 at 5:03

  • 3

    At best they refer to colour palette/depth per pixel; definitely not pixel size. Unless you mean internal storage size, but if so you might want to spend more time on that part explaining just what you mean.

    – SevenSidedDie
    Feb 16 '15 at 5:07

  • My old 8-bit Bally system (which had handgun grips for controllers) had a 'pong' game that used more than 8 pixels per paddle. This is positively out in left field.

    – Tim Post
    Feb 16 '15 at 5:09

  • The bit count of each generation is certainly tied to the graphical fidelity that could be produced. But it's by no means what the bit count "means".

    – DJ Pirtu
    Feb 16 '15 at 7:46

  • @SevenSidedDie, the NES was approximately 4.6-bit indexed color; clever programming could get 5.75-bit indexed color. The SNES is difficult to quantify, but appears to be 8.6-bit indexed color or 11-bit direct color. The Nintendo 64 and onward use straightforward 24-bit direct color.

    – Mark
    Sep 24 '15 at 22:23













        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        68














        8-bit and 16-bit, for video games, specifically refers to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety in the graphics and the music by affecting how much data can be used at once; Oak's answer details the specifics of graphics.



        If you don't know about a computer bit, then here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit, which I'll quote the first sentence that is all one really needs to know.




        A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.




        Now, note that in modern times, things like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer specifically to the style of music or graphics used in games during those generations, done as a homage to nostalgia. 8-bit music is the standard chiptune fare; the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex but still largely 2-dimensional and 240p resolution.






        share|improve this answer


























        • An example of such "intentional retro": megaman.capcom.com/10

          – Raven Dreamer
          Sep 25 '10 at 22:49








        • 3





          To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

          – Wikwocket
          Sep 26 '10 at 2:00






        • 7





          Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

          – BlueRaja - Danny Pflughoeft
          Sep 26 '10 at 4:18






        • 2





          Yet another amazing answer by Grace Note. :)

          – Kevin Yap
          Sep 26 '10 at 5:43






        • 9





          @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

          – Grace Note
          Nov 8 '10 at 1:07
















        68














        8-bit and 16-bit, for video games, specifically refers to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety in the graphics and the music by affecting how much data can be used at once; Oak's answer details the specifics of graphics.



        If you don't know about a computer bit, then here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit, which I'll quote the first sentence that is all one really needs to know.




        A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.




        Now, note that in modern times, things like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer specifically to the style of music or graphics used in games during those generations, done as a homage to nostalgia. 8-bit music is the standard chiptune fare; the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex but still largely 2-dimensional and 240p resolution.






        share|improve this answer


























        • An example of such "intentional retro": megaman.capcom.com/10

          – Raven Dreamer
          Sep 25 '10 at 22:49








        • 3





          To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

          – Wikwocket
          Sep 26 '10 at 2:00






        • 7





          Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

          – BlueRaja - Danny Pflughoeft
          Sep 26 '10 at 4:18






        • 2





          Yet another amazing answer by Grace Note. :)

          – Kevin Yap
          Sep 26 '10 at 5:43






        • 9





          @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

          – Grace Note
          Nov 8 '10 at 1:07














        68












        68








        68







        8-bit and 16-bit, for video games, specifically refers to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety in the graphics and the music by affecting how much data can be used at once; Oak's answer details the specifics of graphics.



        If you don't know about a computer bit, then here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit, which I'll quote the first sentence that is all one really needs to know.




        A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.




        Now, note that in modern times, things like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer specifically to the style of music or graphics used in games during those generations, done as a homage to nostalgia. 8-bit music is the standard chiptune fare; the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex but still largely 2-dimensional and 240p resolution.






        share|improve this answer















        8-bit and 16-bit, for video games, specifically refers to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety in the graphics and the music by affecting how much data can be used at once; Oak's answer details the specifics of graphics.



        If you don't know about a computer bit, then here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit, which I'll quote the first sentence that is all one really needs to know.




        A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.




        Now, note that in modern times, things like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer specifically to the style of music or graphics used in games during those generations, done as a homage to nostalgia. 8-bit music is the standard chiptune fare; the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex but still largely 2-dimensional and 240p resolution.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Apr 13 '17 at 12:09









        Community

        1




        1










        answered Sep 25 '10 at 22:43









        Grace NoteGrace Note

        23.1k382107




        23.1k382107













        • An example of such "intentional retro": megaman.capcom.com/10

          – Raven Dreamer
          Sep 25 '10 at 22:49








        • 3





          To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

          – Wikwocket
          Sep 26 '10 at 2:00






        • 7





          Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

          – BlueRaja - Danny Pflughoeft
          Sep 26 '10 at 4:18






        • 2





          Yet another amazing answer by Grace Note. :)

          – Kevin Yap
          Sep 26 '10 at 5:43






        • 9





          @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

          – Grace Note
          Nov 8 '10 at 1:07



















        • An example of such "intentional retro": megaman.capcom.com/10

          – Raven Dreamer
          Sep 25 '10 at 22:49








        • 3





          To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

          – Wikwocket
          Sep 26 '10 at 2:00






        • 7





          Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

          – BlueRaja - Danny Pflughoeft
          Sep 26 '10 at 4:18






        • 2





          Yet another amazing answer by Grace Note. :)

          – Kevin Yap
          Sep 26 '10 at 5:43






        • 9





          @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

          – Grace Note
          Nov 8 '10 at 1:07

















        An example of such "intentional retro": megaman.capcom.com/10

        – Raven Dreamer
        Sep 25 '10 at 22:49







        An example of such "intentional retro": megaman.capcom.com/10

        – Raven Dreamer
        Sep 25 '10 at 22:49






        3




        3





        To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

        – Wikwocket
        Sep 26 '10 at 2:00





        To give an idea of where we stand today, gaming consoles have used 64-bit processors since the Atari Jaguar and Nintendo 64. The XBox 360 sports 3 64-bit processors. 64-bit PC processors are finally popular (you will see Windows Seven 64-bit version, for example).

        – Wikwocket
        Sep 26 '10 at 2:00




        7




        7





        Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

        – BlueRaja - Danny Pflughoeft
        Sep 26 '10 at 4:18





        Specifically, it's the size of the accumulator register. However, don't rely on this number to tell you much - 90% of programs will see little-to-no benefit jumping from a 32- to a 64-bit processor. The exceptions are programs which must do complex calculations on large sets of data, such as video encoding.

        – BlueRaja - Danny Pflughoeft
        Sep 26 '10 at 4:18




        2




        2





        Yet another amazing answer by Grace Note. :)

        – Kevin Yap
        Sep 26 '10 at 5:43





        Yet another amazing answer by Grace Note. :)

        – Kevin Yap
        Sep 26 '10 at 5:43




        9




        9





        @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

        – Grace Note
        Nov 8 '10 at 1:07





        @kirk For one reason or another, the "8-bit generation" doesn't start with the intro of 8-bit processors. It probably was a retro-active name: the 16-bit generation was defined by it, but the previous generation heralded by the Famicom was typically considered separate from the Atari 5200 generation. So in the 16-bit generation was named the 16-bit, they simply called the previous one 8-bit at the cost of accuracy.

        – Grace Note
        Nov 8 '10 at 1:07













        26














        8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also refers to the size of the memory address space. The word size of any chip is the most defining aspect of it's design. There are several reasons why it is so important:




        • First off, the maximum value you can hold. An 8-bit integer can hold a value up to 255. A 16-bit int can be up to 65,535.

        • Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).

        • Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.

        • Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.


          • Aside from not having the memory, an 8-bit machine usually has a separate processor for handling floating point math on the hardware. 16-bit machines usually have an integrated floating point unit to handle that.

          • With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.



        • Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.


        TL;DR



        The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).



        I hope this ramble of an answer is of some help.



        Further Reading




        • https://en.wikipedia.org/wiki/Word_(computer_architecture)






        share|improve this answer





















        • 4





          Regarding the first bullet point: An 8-bit integer can hold values up to 255 if it is unsigned. Same for 16-bit, except not 65,536, but 65,535. It's 65,536 different values (including zero, so 65,535 is the max).

          – Victor Zamanian
          Oct 25 '12 at 22:31











        • @VictorZamanian I always get the 16-bit int max off by 1. I'm also not going to get into signed versus unsigned in this statement.

          – CyberSkull
          Nov 1 '12 at 16:28








        • 1





          This, incidentally, is why you can only get as many as 255 rupees in Legend of Zelda, and why 255/254 is often the (seemingly random) hard cap on values in 8-bit games.

          – Zibbobz
          Mar 13 '14 at 16:58






        • 1





          @ZibbobzL Several games max out at 65535 money.

          – Mooing Duck
          Oct 20 '14 at 16:47
















        26














        8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also refers to the size of the memory address space. The word size of any chip is the most defining aspect of it's design. There are several reasons why it is so important:




        • First off, the maximum value you can hold. An 8-bit integer can hold a value up to 255. A 16-bit int can be up to 65,535.

        • Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).

        • Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.

        • Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.


          • Aside from not having the memory, an 8-bit machine usually has a separate processor for handling floating point math on the hardware. 16-bit machines usually have an integrated floating point unit to handle that.

          • With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.



        • Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.


        TL;DR



        The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).



        I hope this ramble of an answer is of some help.



        Further Reading




        • https://en.wikipedia.org/wiki/Word_(computer_architecture)






        share|improve this answer





















        • 4





          Regarding the first bullet point: An 8-bit integer can hold values up to 255 if it is unsigned. Same for 16-bit, except not 65,536, but 65,535. It's 65,536 different values (including zero, so 65,535 is the max).

          – Victor Zamanian
          Oct 25 '12 at 22:31











        • @VictorZamanian I always get the 16-bit int max off by 1. I'm also not going to get into signed versus unsigned in this statement.

          – CyberSkull
          Nov 1 '12 at 16:28








        • 1





          This, incidentally, is why you can only get as many as 255 rupees in Legend of Zelda, and why 255/254 is often the (seemingly random) hard cap on values in 8-bit games.

          – Zibbobz
          Mar 13 '14 at 16:58






        • 1





          @ZibbobzL Several games max out at 65535 money.

          – Mooing Duck
          Oct 20 '14 at 16:47














        26












        26








        26







        8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also refers to the size of the memory address space. The word size of any chip is the most defining aspect of it's design. There are several reasons why it is so important:




        • First off, the maximum value you can hold. An 8-bit integer can hold a value up to 255. A 16-bit int can be up to 65,535.

        • Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).

        • Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.

        • Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.


          • Aside from not having the memory, an 8-bit machine usually has a separate processor for handling floating point math on the hardware. 16-bit machines usually have an integrated floating point unit to handle that.

          • With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.



        • Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.


        TL;DR



        The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).



        I hope this ramble of an answer is of some help.



        Further Reading




        • https://en.wikipedia.org/wiki/Word_(computer_architecture)






        share|improve this answer















        8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also refers to the size of the memory address space. The word size of any chip is the most defining aspect of it's design. There are several reasons why it is so important:




        • First off, the maximum value you can hold. An 8-bit integer can hold a value up to 255. A 16-bit int can be up to 65,535.

        • Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).

        • Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.

        • Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.


          • Aside from not having the memory, an 8-bit machine usually has a separate processor for handling floating point math on the hardware. 16-bit machines usually have an integrated floating point unit to handle that.

          • With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.



        • Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.


        TL;DR



        The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).



        I hope this ramble of an answer is of some help.



        Further Reading




        • https://en.wikipedia.org/wiki/Word_(computer_architecture)







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited 3 mins ago

























        answered Sep 20 '11 at 15:11









        CyberSkullCyberSkull

        14.5k2085152




        14.5k2085152








        • 4





          Regarding the first bullet point: An 8-bit integer can hold values up to 255 if it is unsigned. Same for 16-bit, except not 65,536, but 65,535. It's 65,536 different values (including zero, so 65,535 is the max).

          – Victor Zamanian
          Oct 25 '12 at 22:31











        • @VictorZamanian I always get the 16-bit int max off by 1. I'm also not going to get into signed versus unsigned in this statement.

          – CyberSkull
          Nov 1 '12 at 16:28








        • 1





          This, incidentally, is why you can only get as many as 255 rupees in Legend of Zelda, and why 255/254 is often the (seemingly random) hard cap on values in 8-bit games.

          – Zibbobz
          Mar 13 '14 at 16:58






        • 1





@Zibbobz Several games max out at 65535 money.

          – Mooing Duck
          Oct 20 '14 at 16:47

























        18














        The term "8-bit graphics" literally means that every pixel uses 8 bits for storing the color value - so only 256 options. Modern systems use 8 bits to store each color channel, so every pixel typically uses 24 bits.



        There's nothing preventing modern games from limiting themselves to a stricter, 8-bit color palette; but the term is often used to describe old games in which using 8 bits per pixel was necessary.
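To put numbers on those bit depths, here is a small C sketch; the RGB332 packing at the end is just one common way of squeezing a color into 8 bits, not something tied to any particular console.

    #include <stdio.h>

    int main(void) {
        /* how many colors each pixel format can express */
        printf("8-bit:  %u colors\n", 1u << 8);    /* 256        */
        printf("16-bit: %u colors\n", 1u << 16);   /* 65,536     */
        printf("24-bit: %u colors\n", 1u << 24);   /* 16,777,216 */

        /* one common 8-bit packing: 3 bits red, 3 bits green, 2 bits blue */
        unsigned r = 7, g = 5, b = 2;              /* arbitrary channel values */
        unsigned pixel = (r << 5) | (g << 2) | b;
        printf("RGB332 pixel = 0x%02X\n", pixel);  /* 0xF6 */
        return 0;
    }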






        share|improve this answer



















        answered Sep 25 '10 at 22:42









Oak
44.6k62247408








        • 4





          Specifically, 8-bit color yields a 256-color palette, 16-bit color would be 64k colors, and the modern 24-bit palette supports 16 million colors.

          – Wikwocket
          Sep 26 '10 at 1:55






        • 3





          @Wikwocket: Sometimes, you'll hear reference to 32-bit graphics, which are just 24-bit graphics with an 8-bit transparency setting.

          – user2974
          Sep 26 '10 at 16:11






        • 7





          But the kind of graphics generally called "8-bit" are those associated with games of the NES era, where each tile used only four (or three+transparent for sprites) colors selected from a larger palette - the graphics themselves weren't "8-bit" in any sense.

          – Random832
          May 30 '11 at 17:52






        • 2





          @Random832 well those games may have been called 8-bit, but my answer explains what 8-bit graphics literally means. Maybe those games were called 8-bit on account of the processor, as Grace Note mentioned in the other answer.

          – Oak
          May 30 '11 at 18:49






        • 2





          The SNES could have up to 256 colors on screen out of a palette of 65k colors, the NES had something like 16-24 colors max on screen.

          – CyberSkull
          Sep 20 '11 at 14:54

























        6














Way back in the day, the bit size of a CPU referred to how wide the processor's registers were. A CPU typically has several registers in which you can move data around and operate on it, for example adding two numbers together and then storing the result in another register. In the 8-bit era the registers were 8 bits wide, and if you had a big number like 4,000 it wouldn't fit in a single register, so you would have to do two operations to simulate a 16-bit operation. For example, if you had 10,000 gold coins you would need two add instructions: one to handle the lower 8 bits and another to add the upper 8 bits (with carrying taken into account). A 16-bit system could have done it in one operation. You may remember that in The Legend of Zelda you would max out at 255 rupees, as that's the largest unsigned 8-bit number possible.



Nowadays registers in a CPU come in all different sizes, so this isn't really a good measure anymore. For example, the AVX registers in today's amd64 processors are 256 bits wide (for real), but the processors are still considered 64-bit. These days most people think of the addressing size the CPU is capable of supporting. It seems the bit size of a machine is really based on the hardware trends of the time. For me, though, it is still the size of a native integer register, which seems correct even today and still matches the addressing size of the CPU as well. That makes sense, since the native integer size of a register is typically the same size as a memory pointer.
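The two add instructions described in the first paragraph can be sketched in C (the coin amounts are hypothetical, and the carry that an 8-bit CPU would track in a hardware flag is recovered manually here):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t coins = 9000, earned = 1000;      /* hypothetical amounts */

        /* split each 16-bit value into its low and high bytes */
        uint8_t lo_a = coins  & 0xFF, hi_a = coins  >> 8;
        uint8_t lo_b = earned & 0xFF, hi_b = earned >> 8;

        uint8_t lo = lo_a + lo_b;        /* first ADD: low bytes           */
        uint8_t cy = lo < lo_a;          /* did the low byte wrap around?  */
        uint8_t hi = hi_a + hi_b + cy;   /* second ADD: high bytes + carry */

        uint16_t total = ((uint16_t)hi << 8) | lo;
        printf("total = %u\n", total);   /* prints 10000 */
        return 0;
    }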






        share|improve this answer






































            edited Aug 15 '12 at 16:57









Frank
19.7k2188136










            answered Aug 15 '12 at 16:51









carlos
6911























                6














In addition to Oak's answer, the 8 bits for graphics not only limit¹ the color palette, but also the screen resolution, to a maximum of 256 pixels in each direction (e.g. the NES has 256x240 pixels, of which 256x224 are typically visible). For sprite graphics you need to split these 8 bits, e.g. to obtain 32 = 2⁵ different x-positions and 16 = 2⁴ different y-positions, you have 8x16 (2³x2⁴) pixels left for a sprite's resolution. That is why you get that typical pixel look.



The same applies to music: 8 bit means a maximum of 256 levels for your sound output (per sample; the temporal resolution is another issue), which is too coarse to produce sounds that don't strike the human ear as chiptune (or noisy, if one still attempts PCM sound). 16 bits per sample is what the CD standard uses, by the way. But "16 bit music" more often refers to tracker music, whose limits are similar to those of popular game consoles with a 16-bit processor.
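Here is a minimal C sketch of that quantization (the sine wave and eight samples are arbitrary; real hardware adds its own filtering and noise):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const double TAU = 6.283185307179586;  /* 2 * pi */
        for (int i = 0; i < 8; i++) {
            double s = sin(TAU * i / 8.0);              /* ideal sample in [-1, 1] */
            int8_t  q8  = (int8_t)lrint(s * 127.0);     /* 256 possible levels     */
            int16_t q16 = (int16_t)lrint(s * 32767.0);  /* 65,536 possible levels  */
            printf("%+.4f -> 8-bit %4d, 16-bit %6d\n", s, q8, q16);
        }
        return 0;
    }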



Another interesting point is that an 8 bit input device is limited¹ to 8 boolean button states, split up into the four directions of the D-pad plus four buttons. Or a 2-button joystick with 3 bits (a mere 8 levels, including the sign!) remaining for each of the x- and y-axes.



So, for genuinely old games, 8 bit / 16 bit might be taken to refer to the system's capabilities (but consider Grace's point about the inconsistency of the label "8 bit"). For a retro game, consider whether it would be theoretically possible to obey the mentioned constraints (neglecting shader effects like bloom), although you might have to allow some "cheating" - I'd consider a sprite-based game using 8x16-square sprites still 8 bit even if the sprites could float at any position in HD resolution and the squares were 16x16 pixels each...





¹) Well, obviously you can use 2 times 8 bit to circumvent that limit, but as BlueRaja points out in a comment on Grace's answer, with the accumulator register being only 8 bit as well, that would cause a performance loss. Also, it would be cheating your way to 16 bit, IMHO.






                share|improve this answer






































                    edited Apr 13 '17 at 12:09









Community
1










                    answered Sep 17 '11 at 10:40









Zommuter
8,6731977137























                        0














                        Despite all the interesting technical discussions provided by other contributors, the 8-bit and 16-bit descriptors for gaming consoles don't mean anything consistently. Effectively, 16-bit is only meaningful as a marketing term.



                        Briefly, in word size:




• The Super Nintendo uses the Ricoh 5A22 CPU, which has 16-bit index registers and opcodes that can process 16-bit numbers into a 16-bit accumulator, but it doesn't have the 16-bit register file we might associate with a typical 16-bit processor. I suppose this is a 16-bit word size in 650x terms, but it's a strange terminology to me. I might rather say the 5A22 instruction set supports 16-bit value operations. The 65c816 documentation does not in any location define words as any particular size.

• The Turbo Grafx 16 doesn't have native 16-bit operations, nor a 16-bit accumulator to store results in. Like the Super Nintendo's, its CPU is from the 650x family, but this one supports only 8-bit operations and has only 8-bit registers. If it has a word size, it is 8 bits.

• The Genesis/Mega Drive, with its Motorola 68000, offers a 32-bit word size (with 32-bit registers and 32-bit operations) but was marketed with "16-bit" in the molded plastic. As a relatively new 32-bit CPU, and due to historical patterns, the 68k family names a 16-bit value a "word" but has full native support for nearly all operations on 32-bit values, named "longs". This represents the beginning of the era when "word size" had become a legacy concept. Previously, there were architectures with things like 9-bit words or 11-bit words. From here on, a word most commonly means "two 8-bit bytes".


                        In addressing space:



Most 8-bit consoles had a 16-bit physical addressing space (256 bytes wouldn't get you very far). They used segmenting schemes, but so did the Turbo Grafx 16. The Genesis had a CPU capable of 32-bit addressing.



                        In data bus:



                        The Turbo Grafx 16 and the Super Nintendo had an 8 bit data bus. The Genesis/Mega Drive had a 16 bit data bus.



In color depth:



The total possible color palette is owned by the graphics circuitry, and the palette table is expressed however that circuitry needs it to be. You wouldn't expect this to have much correlation across systems, and it doesn't.




• The Super Nintendo had a 15-bit palette space, and 8 bits of space to select colors out of that space.

• The Genesis had a 9-bit palette space, with essentially 6 bits of space to select colors out of that space.

• The Turbo Grafx 16 also had a 9-bit palette space, with a complicated scheme of many simultaneous palettes, all of which were 4-bit.


This doesn't fully describe the graphics capabilities of the systems, even in terms of colors; they had other features like special layer modes, the specifics of their sprite implementations, and other details. However, it does accurately portray the bit depth of the major features.
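For the record, a trivial C sketch computing the color counts those palette bit depths imply:

    #include <stdio.h>

    int main(void) {
        printf("SNES:        %5d master colors, %3d on-screen selections\n",
               1 << 15, 1 << 8);  /* 32,768 and 256 */
        printf("Genesis:     %5d master colors, %3d selections\n",
               1 << 9, 1 << 6);   /* 512 and 64 */
        printf("Turbo Grafx: %5d master colors, %3d per 4-bit palette\n",
               1 << 9, 1 << 4);   /* 512 and 16 */
        return 0;
    }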



So you can see there are many features of a system that can be measured in bit size, with no requirement that they agree, and there is no particular grouping around any feature that is 16-bit for the consoles grouped this way. Moreover, there is no reason to expect that consumers cared at all about word size or data paths. You can see that systems with "small" values here were nevertheless very capable gaming platforms for the time.



                        Essentially "16-bit" is just a generation of consoles which received certain marketing in a certain time period. You can find a lot more commonality between them in terms of overall graphics capability than you can in terms of any specific bitness, and that makes sense because graphics innovation (at a low cost) was the main goal of these designs.



                        "8-bit" was a retroactive identification for the previous consoles. In the US this was a the dominant Nintendo Entertainment System and the less present Sega Master System. Does it apply to an Atari 7800? A 5200? An Intellivision? An atari 2600 or colecovision or an Odyssey 2? Again, there is no bitness boundary that is clear among these consoles. By convention, it probably only includes the consoles introduced from around 1984 to 1988 or so, but this is essentially a term we apply now that was not used then and refers to no particular set of consoles, except by convention.






                        share|improve this answer






































                            edited Dec 5 '18 at 5:43

























                            answered Dec 4 '18 at 22:54









jrodman
314























                                -8














When talking about retro gaming, 8-bit, 16-bit, and 64-bit simply mean the number of pixels used to create the images. For example, the NES and Sega Master System are very blocky and have large pixels (8-bit); the SNES and Sega Genesis improve on this ("16-bit"); and the N64 takes the concept further to 64-bit, and so on to 128, to 256, and eventually to 1080 HD. Even though it is, and was, slightly out of context.



Nintendo Power in the early '90s actually created these "terms" when they released articles about how Nintendo's 8-bit power was so much better than Sega's. To each their own, but anyway, they did this because 99% of people would have no clue what they were actually talking about.






                                share|improve this answer





























                                edited Feb 16 '15 at 6:42









                                Robotnik

                                27.3k43127227














                                answered Feb 16 '15 at 4:57









the avid nintendo freak

                                1












                                • 3





                                  Pixels are nowhere near what 8/16/32/64 or higher bit mean at all.

                                  – Frank
                                  Feb 16 '15 at 5:03






                                • 3





                                  At best they refer to colour palette/depth per pixel; definitely not pixel size. Unless you mean internal storage size, but if so you might want to spend more time on that part explaining just what you mean.

                                  – SevenSidedDie
                                  Feb 16 '15 at 5:07











                                • My old 8-bit Bally system (which had handgun grips for controllers) had a 'pong' game that used more than 8 pixels per paddle. This is positively out in left field.

                                  – Tim Post
                                  Feb 16 '15 at 5:09











• The bit count of each generation is certainly tied to the graphical fidelity that could be produced, but it's by no means what the bit count "means".

                                  – DJ Pirtu
                                  Feb 16 '15 at 7:46











• @SevenSidedDie, the NES was approximately 4.6-bit indexed color; clever programming could get 5.75-bit indexed color. The SNES is difficult to quantify, but appears to be 8.6-bit indexed color or 11-bit direct color. The Nintendo 64 and onward use straightforward 24-bit direct color.

                                  – Mark
                                  Sep 24 '15 at 22:23
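
To make the arithmetic behind the comments above concrete, here is a minimal sketch (not part of any answer here) of what SevenSidedDie's "depth per pixel" and Mark's fractional bit depths mean: an n-bit-per-pixel format can index 2**n colors, and a console's effective depth is log2 of the number of colors it can actually display. The specific color counts below (25 simultaneous NES colors, roughly 54 usable master-palette entries, 2**24 for N64 direct color) are assumptions taken from commonly cited hardware limits, not from this page.

    import math

    # Bits per pixel -> how many colors one pixel can index (2**n).
    # NES background tiles, for instance, are 2 bpp: 4 colors per tile.
    for bpp in (2, 4, 8):
        print(f"{bpp} bpp -> {2 ** bpp} indexable colors")

    # Effective "bit depth" of a console = log2(colors it can display).
    # Counts are assumptions based on commonly cited specs: 25 colors
    # on screen at once for the NES, ~54 distinct master-palette entries
    # (reachable with mid-frame palette tricks), 2**24 for N64.
    color_counts = {
        "NES (on screen at once)": 25,
        "NES (master palette, with tricks)": 54,
        "N64 (24-bit direct color)": 2 ** 24,
    }
    for system, colors in color_counts.items():
        print(f"{system}: log2({colors}) ~= {math.log2(colors):.2f} bits")

    # Output:
    # 2 bpp -> 4 indexable colors
    # 4 bpp -> 16 indexable colors
    # 8 bpp -> 256 indexable colors
    # NES (on screen at once): log2(25) ~= 4.64 bits
    # NES (master palette, with tricks): log2(54) ~= 5.75 bits
    # N64 (24-bit direct color): log2(16777216) ~= 24.00 bits

Note how 4.64 and 5.75 line up with the "4.6-bit" and "5.75-bit" figures in Mark's comment: the fractional values simply reflect color counts that are not powers of two.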































