Difference between signed and unsigned char [duplicate]

Chiggins

So I know that the difference between a signed int and an unsigned int is that a bit is used to signify whether the number is positive or negative, but how does this apply to a char? How can a character be positive or negative?

  • A previous answer on this topic will help.

    – jarmod

    Dec 2, 2010 at 16:24

  • “So I know that the difference between a signed int and an unsigned int is that a bit is used to signify whether the number is positive or negative.” – Note that this is only one of the ways it is done, and not the most common and practical one.

    – Christian Rau

    Jun 5, 2013 at 16:50

AnT stands with Russia

There is no dedicated “character type” in the C language. char is an integer type, the same (in that regard) as int, short, and other integer types. char just happens to be the smallest integer type. So, like any other integer type, it can be signed or unsigned.

It is true that (as the name suggests) char is mostly intended to be used to represent characters. But characters in C are represented by their integer “codes”, so there is nothing unusual in the fact that an integer type, char, is used to serve that purpose.

The only general difference between char and the other integer types is that plain char is not synonymous with signed char, while with the other integer types the signed modifier is optional/implied.
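
As a minimal illustration of this point (assuming an ASCII platform, which the standard does not require): a char takes part in ordinary integer arithmetic, and a character literal such as 'A' is just its integer code.

#include <stdio.h>

int main(void)
{
    char c = 'A';             /* stores the integer code of 'A' (65 in ASCII) */
    c = c + 1;                /* ordinary integer arithmetic */
    printf("%c %d\n", c, c);  /* prints: B 66 (on an ASCII platform) */
    return 0;
}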

  • Alright, hopefully you can stay with me here; I'm not so great with bits / C and whatnot. So with a signed char, something like 01011011 could represent a character?

    – Chiggins

    Dec 2, 2010 at 16:34

  • @Chiggins: Is that binary? If so, then yes. Your 01011011 is the binary representation of 91, so it will represent whatever character has code 91 on your platform ([ on PC, for example).

    – AnT stands with Russia

    Dec 2, 2010 at 17:40

  • For a simple proof of the nature of chars being ints, try applying switch...case, which can be applied only to integral values (a short sketch after these comments demonstrates this).

    – rbaleksandar

    Sep 10, 2015 at 10:39

  • C89 6.1.2.5 “There are three character types, designated as char, signed char, and unsigned char.” C11 6.2.5p15 “The three types char, signed char, and unsigned char are collectively called the character types.” 6.2.5fn45 “char is a separate type from the other two and is not compatible with either”

    – Cubbi

    May 7, 2016 at 4:17
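
A small sketch tying the comments above together (again assuming ASCII): the bit pattern 01011011 is 91, which is '[' in ASCII, and a char can drive a switch precisely because it is an integral type.

#include <stdio.h>

int main(void)
{
    char c = 0x5B;       /* 0b01011011 == 91 == '[' in ASCII */
    switch (c) {         /* switch requires an integral type; char qualifies */
    case '[':
        printf("91 is '%c'\n", c);
        break;
    default:
        printf("not an ASCII platform\n");
        break;
    }
    return 0;
}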

Simple Fellow

I slightly disagree with the above. unsigned char simply means: use the most significant bit as part of the value, instead of treating it as a +/- sign flag, when performing arithmetic operations.

It makes a difference if you use char as a number, for instance:

typedef char BYTE1;
typedef unsigned char BYTE2;

BYTE1 a;
BYTE2 b;

For variable a, only 7 bits are available and its range is (-127 to 127) = (+/-)2^7 -1.
For variable b all 8 bits are available and the range is 0 to 255 (2^8 -1).

If you use char as a character, “unsigned” is completely ignored by the compiler, just as comments are removed from your program.
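
A short sketch of the difference described above, reusing the answer's own typedefs; the printed values assume a typical two's-complement platform where plain char is signed (where, as the comments below point out, the signed range is actually -128 to 127):

#include <stdio.h>

typedef char BYTE1;
typedef unsigned char BYTE2;

int main(void)
{
    BYTE1 a = 200;  /* out of range for a signed 8-bit char: implementation-defined, typically wraps to -56 */
    BYTE2 b = 200;  /* in range for unsigned char: stays 200 */
    printf("a = %d, b = %d\n", a, b);  /* typically prints: a = -56, b = 200 */
    return 0;
}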

  • I think you made a mistake (correct me if I'm wrong): “a” is a signed char so the range is -128 to 127, and “b” is an unsigned char so the range is 0 to 255. similar question

    – dalf

    Feb 6, 2014 at 2:22


  • You need to change this answer to reflect that signed integers use two's complement and not a sign bit like you say, because as it stands this answer is incorrect.

    – Matthew Mitchell

    Nov 14, 2014 at 15:47

  • This is incorrect. In C, signed integer types use two's complement, with range -2^(n-1) to 2^(n-1)-1, where n is the number of bits, so 0 is counted once, not twice. By default, a char is unsigned, not signed. Please correct this; it is a simple but incorrect explanation.

    – wizzwizz4

    Feb 15, 2016 at 11:52


  • @wizzwizz4: AFAIK, whether char is unsigned or signed by default is defined by the implementation, not by the standard (e.g. see stackoverflow.com/a/2054941/138526)

    – Felix Schwarz

    Feb 14, 2017 at 15:04

  • @wizzwizz4 The C standard does not define the encoding format for signed integers. It is up to the compiler designer to choose between two's complement, one's complement, and sign & magnitude.

    – Fernando

    Mar 23, 2017 at 14:22

There are three char types: (plain) char, signed char and unsigned char. Any char is usually an 8-bit integer* and in that sense, a signed char and an unsigned char have a useful meaning (generally equivalent to int8_t and uint8_t). When used as a character in the sense of text, use a plain char. This is typically a signed char, but it can be implemented either way by the compiler.

* Technically, a char can be any size as long as sizeof(char) is 1, but it is usually an 8-bit integer.
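
One way to see that the three types really are distinct (as Cubbi's comment on the standard above also notes) is C11's _Generic selection, which dispatches on the static type of its argument; a minimal sketch, where the TYPE_NAME macro is just for illustration:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), \
    char:          "char",         \
    signed char:   "signed char",  \
    unsigned char: "unsigned char")

int main(void)
{
    char c = 0;
    signed char sc = 0;
    unsigned char uc = 0;

    /* all three branches may coexist because the types are distinct */
    printf("%s / %s / %s\n", TYPE_NAME(c), TYPE_NAME(sc), TYPE_NAME(uc));
    /* prints: char / signed char / unsigned char */
    return 0;
}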

  • “There are three char types” – That only applies to C++.

    – Martin

    Jun 5, 2014 at 1:13

  • @Martin It applies to C to an even greater degree than to C++

    – Cubbi

    May 7, 2016 at 4:13

The representation is the same; the meaning is different. E.g., the byte 0xFF is stored as the same bit pattern either way. Treated as a signed char it is the negative number -1, but as an unsigned char it is 255. Bit shifting is where it makes a big difference, since the sign bit is not shifted out: shifting 255 right by 1 bit gives 127, while shifting -1 right (with an arithmetic shift) has no effect.
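
A sketch of the shifting difference described above. Two standard-C caveats apply: the operands are promoted to int before the shift, and right-shifting a negative value is implementation-defined, though mainstream compilers use an arithmetic shift.

#include <stdio.h>

int main(void)
{
    unsigned char u = 0xFF;  /* 255 */
    signed char   s = -1;    /* same bit pattern 0xFF on two's complement */

    printf("%d\n", u >> 1);  /* 127: zeros shifted in */
    printf("%d\n", s >> 1);  /* typically -1: sign bit replicated (implementation-defined) */
    return 0;
}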

supercat

A signed char is a signed value which is typically smaller than, and is guaranteed not to be bigger than, a short. An unsigned char is an unsigned value which is typically smaller than, and is guaranteed not to be bigger than, a short. A type char without a signed or unsigned qualifier may behave as either a signed or unsigned char; this is usually implementation-defined, but there are a couple of cases where it is not:

  1. If, in the target platform’s character set, any of the characters required by standard C would map to a code higher than the maximum `signed char`, then `char` must be unsigned.
  2. If `char` and `short` are the same size, then `char` must be signed.

Part of the reason there are two dialects of “C” (those where char is signed, and those where it is unsigned) is that there are some implementations where char must be unsigned, and others where it must be signed.
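
Which of the two dialects a given implementation uses can be checked from <limits.h>; a minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
#if CHAR_MIN < 0
    printf("plain char is signed here (CHAR_MIN = %d)\n", CHAR_MIN);
#else
    printf("plain char is unsigned here (CHAR_MIN = %d)\n", CHAR_MIN);
#endif
    return 0;
}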

Stuart Golodetz

The same way — e.g. if you have an 8-bit char, 7 bits can be used for magnitude and 1 for sign. So an unsigned char might range from 0 to 255, whilst a signed char might range from -128 to 127 (for example).
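
The exact ranges on a given platform come from <limits.h>; a short sketch that prints them:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("signed char:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);  /* typically -128 to 127 */
    printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);    /* typically 255 */
    printf("plain char:    %d to %d\n", CHAR_MIN, CHAR_MAX);    /* matches one of the above */
    return 0;
}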

Jack

This is because a char is, to all effects, stored as an 8-bit number. Speaking about a negative or positive char doesn't make sense if you consider it an ASCII code (which can be just signed*), but it makes sense if you use that char to store a number, which could be in the range 0-255 or in -128..127, according to the two's-complement representation.

*: it can also be unsigned; it actually depends on the implementation, I think. In that case you will have access to the extended ASCII charset provided by the encoding used.
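
A sketch of the “same byte, two readings” idea, reinterpreting one bit pattern through a union (assuming the usual 8-bit, two's-complement char; reading a union member other than the one last written is permitted for this purpose in C):

#include <stdio.h>

int main(void)
{
    union {
        unsigned char u;
        signed char   s;
    } byte;

    byte.u = 0x80;                     /* bit pattern 10000000 */
    printf("unsigned: %d\n", byte.u);  /* 128 */
    printf("signed:   %d\n", byte.s);  /* -128 on two's complement: 128 - 256 */
    return 0;
}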
