Intel Galileo

According to the Adafruit library, the NeoPixels need 800 kHz (the earlier versions based on the WS2811 need 400 kHz), and looking at the code, it disables interrupts before starting to update all of the LEDs (which are driven in a serial fashion), and re-enables interrupts after all of the pixels have been updated.

The code uses AVR assembly language to minimize any timing issues. They use different asm code for 8 MHz and 16 MHz AVRs, and for the older 400 kHz WS2811 vs. the 800 kHz WS2812.

For the ARM case, Paul has contributed special code for the Teensy 3.0, and if it isn't a Teensy, the code assumes it is a Due. Unlike the AVR path, it looks like it is mostly C code on ARM.

Now presumably the native CPU on the Galileo is fast enough, but the problem is that ALL of the GPIOs are done via a Cypress I2C port expander, and that expander can only run at 100 kHz (the Galileo processor itself can run I2C at 400 kHz). But even at 400 kHz it is not fast enough, since each I2C transaction presumably takes a few bytes over the wire. The NeoPixel documentation says that other Linux systems-on-a-chip like the Raspberry Pi, BeagleBone Black, pcDuino, etc. have similar limitations, and that you really need a dedicated 8 MHz or faster microcontroller.
The BeagleBone Black CPU has some rather nice I/O coprocessors running at 200 MHz - so it basically has that dedicated microcontroller built in.

Just a 'simple' matter of writing a driver (those I/O coprocessors are only programmable with their own assembly language)...
 
I view this as a Hail Mary pass by Intel: they see that current collegians are flocking to ARM on the high end, not to x86 boards. I don't know the numbers, but I suspect there are more ARM shipments than x86, and in fact you see people moving to tablets (almost always ARM based) over traditional laptops.

Intel dominates the desktop PC market, and AMD is basically there to prevent them from being regulated as a monopoly. This market is currently in slow decline. The (comparatively quickly-growing) smartphone and tablet market has rallied around ARM chips due to low cost, low power consumption, and acceptable performance. If Intel wants to keep growing, it needs to penetrate that market, and Atom chips (though a step up from the past heating appliances masquerading as CPUs) didn't make the cut. So Intel tries again, this time with a new processor (Quark) that may be competitive with ARM in terms of cost, performance, etc. Whether or not this architecture will catch on is a completely different question. At the very least, success here would not only allow Intel to further spread the considerable costs of developing new CPU architectures and fab processes, it could also enable it to further lock folk into the x86 architecture.

I doubt that the broader market would pursue Intel at this time as a CPU core for tablets, phablets, and smartphones. As long as ARM and its many contributors/manufacturers/clients continue to push the ARM architecture forward (e.g. the 64-bit A7), Intel is put in a hard negotiating position. It is entirely possible that Mac users would still be enjoying PowerPC processors today if the PowerPC consortium had been more successful at pushing CPU speeds/capabilities forward instead of stagnating. However, that was not to be, and Apple subsequently took a gamble converting from PowerPC to x86, just as it had from the 68000 to PowerPC some years before that.

But if Intel has a viable competitor to ARM, then should the day come that ARM fails to deliver on its promises, a viable market alternative will exist and Intel will profit. It's a long-term play, but Intel has deep pockets, and hence can afford to bide its time while picking off clients one by one. In the meantime, consumers like you and me benefit from the competition in the MCU market, with manufacturers competing for our business and making ever more powerful chips available at lower and lower prices. Given the industry structure around the ARM platform, this may in fact be a more viable and stable long-term business plan than the vertically-integrated approach that Intel took. For all I know, Intel has even considered splitting itself into separate chip-design and fab companies, which would allow Intel's fabs a slice of the ARM pie.

I do wonder at Mouser buying 10k Galileo units, i.e. how many of them will be returned to Intel at some point in the future? How will they be written off... as a marketing expense?
 
Some reports state that the desktop market is in a much steeper decline than "slow" suggests.

In terms of Intel's fabs producing ARM chips: they used to, but this year Intel announced the closing of the Hudson, Massachusetts fab that it had acquired from Digital Equipment Corporation and that had been producing ARM chips.
 
Tangent on Intel/AMD

My professional days must be different than most others'.
I would not be using a tablet much.

After work.. via the tablet: lots. For fun stuff. But pennies vs. what I buy/do in the work day. When I can wrench it away from my wife.
But too, I use the desktop more than the tablet for personal stuff, like personal finance, etc. Stuff that matters.

I can't help but believe that the rush to compete with Apple is executives' pride and ego run amok. The $$$ isn't in social networking, except for a few non-manufacturers.
 
Mouser pricing for the Intel Galileo was $60, then changed to $64, and now, with availability 4 weeks away, it's $69.
That's nosebleed territory at $69. :(

For $69 - I could buy

3 Teensy 3s or
2 Raspberry Pis (Model B) or
4 Seeeduino Lites (Arduino compatible) or
1 Digilent chipKIT WF32 or
1 TFT Maximite computer kit
 
With all those parts on a fairly large board that must be at least 6 layers, I'm pretty amazed it could even sell for "only" $69.
 
The profit or loss on these is probably small compared to Intel's budget for paper clips.

From an Intel investor's view (I sold all of my shares recently), Intel is in danger of being a long-lived but one-trick pony as ARM becomes omnipresent. Maybe Intel will buy/merge with some biggie and change course in the cloud-computing future.

We all owe a lot to AMD, as what would CPUs cost if Intel had a full monopoly?
I've bought mostly AMD for 15 years or so, just to add my little vote for fair and open competition.
 
The profit or loss on these is probably small compared to Intel's budget for paper clips.
We all owe a lot to AMD, as what would CPUs cost if Intel had a full monopoly?
I've bought mostly AMD for 15 years or so, just to add my little vote for fair and open competition.

Though speaking as an ex-AMDer, AMD has had mighty hard times for the last 4 years.
 
We all owe a lot to AMD, as what would CPUs cost if Intel had a full monopoly?

Intel would likely be in the same place that MS is with Windows/Office: a company that likely would have benefitted from a breakup, versus the ongoing mess that Ballmer and Co. have been trying to reinvent for the last 10 years. A stock that has largely gone sideways since 2001, because market dominance usually leads to crummy performance. See Quicken/Intuit as a further, similar example. Competition is not only good because the consumer benefits; it's good because it drives the company to deliver a better product that customers then want to buy.

In the case of MS, I find it hilarious how badly marketing has been allowed to subvert what was a pretty good office product. All editions got ribbons, even as the user base howled over the loss of screen real estate, the hiding of critical buttons in obscure locations, etc. All for eye candy. At least Excel 2007 progressed somewhat by becoming multi-core aware/capable, getting a better cell name editor, and gaining numerous mathematical upgrades to make calculation faster. But PowerPoint still does not have a proper review capability, a feature Word has enjoyed since 1990 (!!!). So yes, competition is good, because the drive for differentiation will lead to more features that are relevant to the user base, not eye candy papering over a moribund product strategy.

AMD at least kept Intel somewhat honest. Over the years, AMD did a great job of challenging Intel with really nifty processor upgrades. But like Motorola, its production had a hard time delivering. I joked in the 90s that Intel strategically hired away all of Motorola's good manufacturing engineers, hence the dearth of upgrades to the PowerPC 604 line of chips. Motorola at one point even tried to claim that the 604 chips it was trying to make were not manufacturable, prompting IBM to offer to make them in Poughkeepsie instead. When IBM had no issues making those 604s, Motorola's bluff was called.

Anyhow, I believe that Intel still has a lot of potential as more folk on the planet want to get a PC to supplement their iPads, phablets, smartphones, whatnot with a 'real' computer. But, unless Intel can convince more folk to transcode video or otherwise engage in heavy lifting (CPU-loadwise) on a regular basis, the margin for customer experience improvement is shrinking.
 
Anyone know if the 400 MHz x86-based CPU in the Galileo would be equal to or faster at applications (not MIPS) than, say, an 800-1000 MHz ARM Cortex-A8?
CISC vs. RISC, though some of the ARM instructions are quite CISC-like.

If you're doing display-based work, I suppose the user experience comes down to the quality of the graphics hardware and X Window drivers more than CPU speed.
 
Hopefully when more people get their hands on Galileo boards we'll start to see some benchmarks or comparisons between the Galileo, BeagleBone Black, and Raspberry Pi.

In theory, Arduino will be shipping "Tre" early next year, which looks like a Leonardo and Beaglebone Black on the same board?
 
The beaglebone black is certainly nifty. Could make a very nice little embedded solution for data capture and storage + web server for same without using a general-purpose CPU.

The Tre is IMO what Arduino should have released instead of the Due. It's a much better solution re: hardware compatibility than the Galileo. The only negative is a SMD-based 328P, meaning that smoked components are that much harder to replace. However, the board as is makes a much better transition piece to 3.3V ARM processors than the Due.
 
That is truly unfortunate to cripple GPIO like that. At least it does have full-speed I2C and SPI:
I2C/TWI: SDA and SCL pins near the AREF pin (also brought out on A4/SDA and A5/SCL); supports TWI communication using the Wire library.
SPI: defaults to 4 MHz to support Arduino Uno shields.
 
I thought the Tre has a 32u4?

As usual, you are absolutely correct - it is an Atmel 32u4. Makes sense too, since that allows you to leave all non-USB pins as is, and simply use D+ and D- to communicate with the Cortex-A8 processor. However, this SMD limitation means IMO that this board is really not for experimentation by novices.

It's too bad that they couldn't offer a Tre with a Teensy 2 board as its centerpiece. That would allow you to independently get everything working on the Teensy and then transition to the ARM board. Of course, this would require either a plug for the USB signal (ideally with an isolator) or exposed D+/D- pins.
 
In theory, Arduino will be shipping "Tre" early next year, which looks like a Leonardo and Beaglebone Black on the same board?

The main interesting thing about Arduino 3 (Tre) is that it affirms the link with Arduino 1 (Uno) and essentially deprecates Arduino 2 (Due). Arduino has stalled at the Uno/Leonardo due to crap libraries and the failure of the larger Mega/Due shield pinout. Thus, all the action will be in coprocessors - a big tail wagging a small dog. The Due will be shunted off to a sideline and may never exit beta.

Ffffft I was going to stay out of this thread.
 
The main interesting thing about Arduino 3 (Tre) is that it affirms the link with Arduino 1 (Uno) and essentially deprecates Arduino 2 (Due). Arduino has stalled at the Uno/Leonardo due to crap libraries and the failure of the larger Mega/Due shield pinout. Thus, all the action will be in coprocessors - a big tail wagging a small dog. The Due will be shunted off to a sideline and may never exit beta.

My thoughts exactly. When the Arduino team decided to go 3.3V with a 144-pin ARM, they should have come up with a new pin form factor that could handle all of those pins, and the Tre is an acknowledgement thereof. The traditional 5V-based Arduino shield form factor that made so many shields work should not be messed with; all that does is create confusion and lead to magic smoke releases. I expect the next ARM release in the Arduino series to just use the left side (i.e. the ARM side) of the Tre board and omit the 5V AVR transition module on the right. They should call it the Due v2, i.e. the Due that should have been, not the Due that was released.

Paul chose a form factor that breadboarders have been able to use for decades, avoiding the obvious pin-voltage compatibility issues that the Due ran into headfirst. While Paul only lost one pin in the battle of accommodating all the pins that his Teensy 3 MCU uses, the Due lost a lot more, and some of those losses are debilitating for the very enthusiasts / cutting-edge users that the Due was allegedly targeting. I've said it before and I'll say it again: IMO Paul's excellent Kickstarter campaign for the Teensy 3 forced the hand of the Arduino team to release the Due before it was fully developed.

Ultimately, the transition to 3.3V makes a lot of sense, and if one goes there, one might as well take advantage of more modern 32-bit CPUs that offer many new and welcome features. Most sensor systems are 3.3V compatible, and SD cards, LCDs, TFTs, nRF modules, XBees, etc. all use 3.3V logic signals. So why not standardize around 3.3V and 32 bits for the micros of the near-term future?
 
I was a bit surprised to see the Tre using an 8-bit AVR. It does seem like they're abandoning the Due's effort to move to the SAM3. But it is a lot of difficult work to make a 32-bit ARM board compatible with so much code designed for the AVR.
 
I was a bit surprised to see the Tre using an 8-bit AVR. It does seem like they're abandoning the Due's effort to move to the SAM3. But it is a lot of difficult work to make a 32-bit ARM board compatible with so much code designed for the AVR.
I've been wondering about that. I don't have a Due, so I don't follow it. But given that the recent machines have gone back to AVR and 5V, maybe it is more a matter of concentrating on what people are actually buying.

Given the Due clone DigiX (from the digispark people) is in theory about to ship (*), perhaps the 1,229 backers of it will breathe new life into the platform.

(*) though, they did announce that they are having factory problems, and it will take longer.
 
note re: GPIO speed comparison

On another note... since both the Pi and Galileo GPIO are extremely slow, an enterprising individual
could turn the Teensy 3 into a "serial slave GPIO" by using a serial port from the Galileo...
FWIW... the R-Pi GPIO can bit-bang a squarewave of 22 MHz (you can change I/O state every 22.5 nanoseconds, and the output square wave period is 45 ns) according to my oscilloscope measurement, using this code:
Code:
    while (1) {
        GPIO_SET = 1 << 7;   // set R-Pi GPIO Port 1 Pin 7
        GPIO_CLR = 1 << 7;   // clear it again, at maximum rate
    }
in the overall structure of the R-Pi wiki GPIO-in-C example. Note that I compiled with gcc -O3; without it, the loop is slightly slower.

Now of course, running Linux, you do have occasional gaps in your square wave due to multitasking, unless you take some steps (e.g. turn off interrupts). Anyway, it just seems unfair to paint R-Pi and Galileo GPIO speed with the same brush when the difference is 5 orders of magnitude (update interval = 22 ns vs. 2 ms).
 
Now of course, running Linux, you do have occasional gaps in your square wave due to multitasking, unless you take some steps (e.g. turn off interrupts). Anyway, it just seems unfair to paint R-Pi and Galileo GPIO speed with the same brush when the difference is 5 orders of magnitude (update interval = 22 ns vs. 2 ms).
I wonder when all of the stuff needed for virtual machines will make it to the R-Pi, BBB, pcDuino class of machines, so you could have one VM doing the bit-banging and the other doing the general Linux stuff. I've seen announcements that some phones have this ability (mostly to allow a business phone that is fully protected alongside a home phone that is wide open).
 
The truth is, even on Teensy & Arduino boards, a tight loop bit-banging a fast waveform can still be interrupted. The millis() interrupt runs 1000 times per second on Teensy 3, and usually 976.56 times per second on AVR-based boards. As you use more features and libraries, other interrupts come into play.

Of course, Linux will context switch between other user level programs.
 