Help on how to turn digital inputs into a joystick position


damaxx

New member
Looking to use 10 digital inputs to give an analog value for the X axis of a game controller.
So rather than a true analog input, it will be 10 or 11 stepped positions from the digital inputs.


The digital truth table, with inputs 0-9 left to right and the resulting analog value, looks like this:

0 1 2 3 4 5 6 7 8 9 Analog
1 0 0 0 0 0 0 0 0 0 1023
1 1 0 0 0 0 0 0 0 0 921
0 1 1 0 0 0 0 0 0 0 819
0 0 1 1 0 0 0 0 0 0 717
0 0 0 1 1 0 0 0 0 0 615
0 0 0 0 1 1 0 0 0 0 513
0 0 0 0 0 1 1 0 0 0 411
0 0 0 0 0 0 1 1 0 0 309
0 0 0 0 0 0 0 1 1 0 207
0 0 0 0 0 0 0 0 1 1 105
0 0 0 0 0 0 0 0 0 1 0

So pretty straightforward.
I just need to put that value into Joystick.X(value); and I am unsure of the cleanest way to do it.

Very new to coding so I would appreciate fairly basic answers if possible although any help would be appreciated.
 
Looks like a Gray code of sorts, meaning only one bit changes value between adjacent rows in that truth table.
If you're not counting microseconds, and don't want to do a table lookup, then I'd just code a C switch statement:

Code:
// ordinary C code:
int n, tenBitValue;
// get input into: tenBitValue, then 

switch (tenBitValue) {
    case 0x200: n = 1023; break;
    case 0x300: n = 921;  break;
    case 0x180: n = 819;  break;
    case 0x0C0: n = 717;  break;
    case 0x060: n = 615;  break;
    case 0x030: n = 513;  break;
    case 0x018: n = 411;  break;
    case 0x00C: n = 309;  break;
    case 0x006: n = 207;  break;
    case 0x003: n = 105;  break;
    case 0x001: n = 0;    break;
    default:    n = -1;   break;
}
// now use the value stored in variable: n (if n is not -1)

This code assumes the truth table and bits of I/O come in as listed in the table. NOTE: the bit numbers in the truth table's column headings are reversed from computer conventions, where the least significant bit is number 0, the next is number 1, etc. I think of a bit's number as the exponent of its power of two: bit 0 is 2 to the 0th = 1, bit 1 is 2 to the 1st = 2, and so on.
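To make that reversal concrete, here is a small illustrative snippet (the column-to-bit mapping is inferred from the case values above, not stated explicitly in the post): the table's second row, with columns 0 and 1 high, packs into bits 9 and 8 of tenBitValue, giving 0x300.

Code:
// Illustrative only: truth-table column k goes into bit (9 - k),
// which is what makes the first row come out as 0x200 and the second as 0x300.
#include <stdio.h>

int main(void) {
    int columns[10] = {1, 1, 0, 0, 0, 0, 0, 0, 0, 0};  // second row of the table
    int tenBitValue = 0;

    for (int k = 0; k < 10; k++) {
        if (columns[k])
            tenBitValue |= 1 << (9 - k);  // column 0 -> bit 9, column 9 -> bit 0
    }
    printf("0x%03X\n", tenBitValue);      // prints 0x300, the 921 case above
    return 0;
}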

The numbers above, like 0x300, are hexadecimal. 0x100 = 256 decimal (two to the 8th power). Each hexadecimal digit is 0 to 9 or A to F, standing for the values 0 to 15 decimal; hexadecimal is just groups of four binary bits, and four bits span 0 to 15 decimal. Your table has binary bits (0 or 1). Some compilers support the non-standard notation 0b11001101, where the '0b' means the rest of the digits are binary, as in your table, but it's best not to rely on '0b' since it's non-standard for C compilers.
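Putting the pieces together for the original question, here is a minimal Teensy/Arduino-style sketch of one way this could look, assuming the board's USB Type is set to include Joystick. The pin numbers, active-high wiring, and the 10 ms loop delay are assumptions for illustration, not from the original post; if the switches pull the pins to ground, use INPUT_PULLUP and treat LOW as active instead.

Code:
// Hypothetical sketch: pin assignments and wiring polarity are assumptions.
// Truth-table column k is read into bit (9 - k), matching the switch cases above.

const int inputPins[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};  // columns 0..9 -> these pins (assumed)

void setup() {
  for (int k = 0; k < 10; k++) {
    pinMode(inputPins[k], INPUT);  // INPUT_PULLUP (and inverted reads) if wired to ground
  }
}

void loop() {
  int tenBitValue = 0;
  for (int k = 0; k < 10; k++) {
    if (digitalRead(inputPins[k]) == HIGH) {
      tenBitValue |= 1 << (9 - k);
    }
  }

  int n;
  switch (tenBitValue) {
    case 0x200: n = 1023; break;
    case 0x300: n = 921;  break;
    case 0x180: n = 819;  break;
    case 0x0C0: n = 717;  break;
    case 0x060: n = 615;  break;
    case 0x030: n = 513;  break;
    case 0x018: n = 411;  break;
    case 0x00C: n = 309;  break;
    case 0x006: n = 207;  break;
    case 0x003: n = 105;  break;
    case 0x001: n = 0;    break;
    default:    n = -1;   break;  // unexpected bit pattern, e.g. mid-transition
  }

  if (n != -1) {
    Joystick.X(n);  // Teensy USB joystick X axis takes 0-1023
  }
  delay(10);
}

The same mapping could also live in a lookup table indexed by contact position if the switch statement ever feels unwieldy, but for 11 cases either approach is fine.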
 
Thanks so much for that reply, Steve. ;) That makes total sense. I appreciate the detailed explanation.
 