uint32_t type

Q&A's, tips, howto's
Locked
csfreebird
Posts: 107
Joined: Tue Jan 15, 2013 11:54 am
Location: China, Beijing
Contact:

uint32_t type

Post by csfreebird »

How do I define a variable that contains a uint32_t integer?
Any example?

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California
Contact:

Re: uint32_t type

Post by Lutz »

By default, integers in newLISP are 64 bits, but when interfacing with external C libraries the precision is truncated automatically to 32, 16, or 8 bits if the imported function asks for it.
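For example, when an imported C function declares a 32-bit parameter, the 64-bit newLISP value is narrowed before the call. A quick sketch, assuming a Linux system where libc.so.6 exports htonl (which takes and returns a uint32_t); the library name differs on other platforms:

Code: Select all

; import htonl from libc; its uint32_t parameter narrows the argument to 32 bits
(import "libc.so.6" "htonl")
; on a little-endian machine this swaps the byte order of the low 32 bits
(htonl 1)  ; => 16777216 (0x01000000)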

When reading from or writing to files or memory, use pack and unpack to format an integer to the exact number of bits (8, 16, 32, 64) required, signed or unsigned. There are also the functions get-char, get-int and get-long to fetch 8-, 32- and 64-bit integers from a memory address. Once any integer smaller than 64 bits is held by a variable, it can grow up to a signed 64-bit number before it overflows when you add to it.
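For example, to write one unsigned 32-bit value to a file and read it back (a quick sketch; the file name num.bin is only an illustration):

Code: Select all

; pack one unsigned 32-bit integer (format "lu") into 4 bytes and write them to a file
(write-file "num.bin" (pack "lu" 4294967284))
; read the 4 bytes back and unpack them as an unsigned 32-bit integer
(unpack "lu" (read-file "num.bin"))  ; => (4294967284)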

csfreebird
Posts: 107
Joined: Tue Jan 15, 2013 11:54 am
Location: China, Beijing
Contact:

Re: uint32_t type

Post by csfreebird »

I used pack and unpack for this.

Code: Select all

(define (u4 v)
  (first (unpack "lu" (pack "lu" v))))
Here are some examples of calling the u4 function:

Code: Select all

> (define (u4 v)(first (unpack "lu" (pack "lu" v))))
(lambda (v) (first (unpack "lu" (pack "lu" v))))
> (u4 8)
8
> (u4 -12)
4294967284
>
But it's not convenient, and it's inefficient.

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California
Contact:

Re: uint32_t type

Post by Lutz »

You could use bit masks:

Code: Select all

> (& 0xFFFFFFFF -12)
4294967284
>
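For example, the u4 function could be written with the mask instead of pack/unpack (a sketch keeping the same name):

Code: Select all

; same result as the pack/unpack version, but without building an intermediate string
(define (u4 v) (& 0xFFFFFFFF v))
(u4 -12)  ; => 4294967284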

csfreebird
Posts: 107
Joined: Tue Jan 15, 2013 11:54 am
Location: China, Beijing
Contact:

Re: uint32_t type

Post by csfreebird »

That's interesting. Why doesn't it change the value, yet makes it look like an unsigned integer instead of a signed one?

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California
Contact:

Re: uint32_t type

Post by Lutz »

On current computers and OSs, negative numbers are encoded as two's complement:

Code: Select all

> (bits -12)
"1111111111111111111111111111111111111111111111111111111111110100"
> (bits (& 0xFFFFFFFF -12))
"11111111111111111111111111110100"
>
> 0b1111111111111111111111111111111111111111111111111111111111110100
-12
> 0b11111111111111111111111111110100
4294967284
>
> 0b0000000000000000000000000000000011111111111111111111111111110100
4294967284
>
http://en.wikipedia.org/wiki/Two%27s_complement

The above would be the 32-bit representation of -12 in a 32-bit integer field. As newLISP handles all integers in 64-bit fields, it fills all 32 higher bits with 1's. The bit mask 0xFFFFFFFF then masks them out again, filling all 32 high bits with 0's and making a 64-bit 4294967284; but if you look only at the 32 lower bits, you have -12 again.
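To read such a masked value as a signed 32-bit number again, you can sign-extend it. A sketch (u4-to-signed is just an illustrative name):

Code: Select all

; if bit 31 is set, subtract 2^32 to recover the signed 32-bit value
(define (u4-to-signed v)
  (if (>= v 0x80000000) (- v 0x100000000) v))
(u4-to-signed 4294967284)  ; => -12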

In order to help you, it would be useful if you told us what you are trying to do from an application point of view. Why do you want to convert 64-bit integers to 32-bit integers? What is your application? Are you interfacing with external libraries? Are you writing out binary data to a file?

csfreebird
Posts: 107
Joined: Tue Jan 15, 2013 11:54 am
Location: China, Beijing
Contact:

Re: uint32_t type

Post by csfreebird »

My newLISP app is used to simulate 10,000 devices that connect to my TCP server.
I have been doing some parallel testing recently.
We transfer 16-bit or 32-bit integers in big-endian order via TCP.
The network traffic means cost for our business. :)
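For such messages, pack with the big-endian modifier ">" produces the exact bytes to send. A quick sketch (device-id and temp are made-up field names):

Code: Select all

; one unsigned 16-bit field and one unsigned 32-bit field,
; both packed in big-endian (network) byte order, 6 bytes total on the wire
(set 'device-id 513 'temp 4294967284)
(set 'msg (pack ">u >lu" device-id temp))
(unpack ">u >lu" msg)  ; => (513 4294967284)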

Locked