K66 Beta Test

@Paul, the ethernet shield arrived. I do badly on multiple choice. What is the proper orientation of the shield to the K66?
 
@Paul, if the third Ethernet shield is still available, and (only if) no one else wants it: I know now that I'll have some time to test it.
 

DOA? I plugged the shield in on the GND/3.3V end of the K66 with an ether cable to the RJ45 (confirmed the cable etc. is good by testing with an mbed K64).
I get no LEDs on the shield. The K66 is running, but your sketch reports no PHY ID:

Code:
192.168.1.67
enetbufferdesc_t size = 32
rx_ring size = 384
MDIO PHY ID2 (LAN8720A should be 0007): 0
MDIO PHY ID3 (LAN8720A should be C0F?): 0
ENET_PALR 0x4E9E500
ENET_PAUR 0x8808
ENET_EIR 0x0
ENET_EIMR 0x0
ENET_ECR 0xF0000112
ENET_MSCR 0x1E
ENET_MRBR 0x200
ENET_RCR 0x45F2D10C
ENET_TCR 0x104
ENET_TACC 0x1
ENET_RACC 0x80
ENET_MMFR 0x600C0000
4.72 V on VIN, 3.265 V on each of the 3.3V pins on the shield.
Running 1.6.9 with 1.29beta4 on Ubuntu (32-bit).

thoughts?
 
Paul, as a note:
For the DYN_SWI lib, used mainly for UHS 3.0, I'll be commandeering IRQ 30 for the Teensy 3.4 and 3.5 boards.
This IRQ seems to be unused, and I will be testing it tonight.
If everything works out OK, I should also be able to make the native host port work, but that may have to wait till next weekend.

Update:
An additional note on the SPI lib for Teensy 3.x:

/root/Arduino/libraries/UHS_host/USB_HOST_SHIELD/USB_HOST_SHIELD.h:33:2: warning: #warning "Your SPI library installation lacks 'SPI_ATOMIC_VERSION'. Please complain to the maintainer." [-Wcpp]
#warning "Your SPI library installation lacks 'SPI_ATOMIC_VERSION'. Please complain to the maintainer."

-- consider this a complaint ;-)
 
The ethernet shield fits with the PCB covering both 24-pin sockets.

Each side of the shield has 20 pins, not 24. Orient the shield's 40 pins onto the right-most 40 of the 48 socket pins. That's pins 3-12+3.3V+24-32 on the bottom row and pins 22-13+GND+DAC1+DAC0+39-33 on the top row. When oriented correctly, the PCB should exactly cover the two sockets, not overhanging either end of them. The RJ45 jack should be above the 8 left-most pins (GND+0-2 and VIN+AGND+3.3V+23). The RJ45 jack does NOT overhang the end of the sockets.

There's no connection to the VIN & GND pins on the left side. The shield gets its power from the 3.3V pin that's between pins 12 and 24, and ground from the GND pin between pins 13 and DAC1.
 

Well, those instructions were more explicit than your email -- luckily, I don't seem to have smoked anything.
The board is running: LEDs on, ARP and ICMP working. :eek:
 
I'd be interested to know if you run into problems with any multibus testing you do. I expect this will be a common usage scenario on T3.5. Regarding the slave problems, that is specific to LC and T3.5 in slave mode. Normal off-the-shelf slaves should work without problems.
Quick question, which driver/library should I use to try out the SSD1306 I2C OLED display?

Note: I still need to download your latest stuff.

I was also wondering, in the i2c_t3.cpp file, if this define is correct:
Code:
#define PIN_CONFIG_ALT(name,alt) uint32_t name = (pullup == I2C_PULLUP_EXT) ? (PORT_PCR_MUX(alt)|PORT_PCR_ODE|PORT_PCR_SRE|PORT_PCR_DSE) \
                                                                            : (PORT_PCR_MUX(2)|PORT_PCR_PE|PORT_PCR_PS)
My gut tells me that in the second line the PORT_PCR_MUX(2) should be PORT_PCR_MUX(alt).
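In other words, the change my gut is suggesting would look something like this (just my reading of the macro, untested):
Code:
// untested suggestion: use the alt mux value in both branches
#define PIN_CONFIG_ALT(name,alt) uint32_t name = (pullup == I2C_PULLUP_EXT) ? (PORT_PCR_MUX(alt)|PORT_PCR_ODE|PORT_PCR_SRE|PORT_PCR_DSE) \
                                                                            : (PORT_PCR_MUX(alt)|PORT_PCR_PE|PORT_PCR_PS)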

Kurt
 
@KurtE: I've used this for the 1306: ...\hardware\teensy\avr\libraries\Adafruit_SSD1306\examples\ssd1306_128x64_i2c

> Bypass the sketch's line 26 "OLED_RESET 4" and use display(-1), or pin 4 is wasted and Wire2 won't work!

You also need to modify the .h file to select the 128x64 - but the compiler tells you that.
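Roughly, the setup I ended up with looks like this (a sketch of my changes; adjust the I2C address and display height for your module):
Code:
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
// remember to select SSD1306_128_64 in Adafruit_SSD1306.h

Adafruit_SSD1306 display(-1);   // -1 instead of OLED_RESET 4, so pin 4 isn't wasted

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  // stock example uses 0x3D; many modules are 0x3C
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(WHITE);
  display.setCursor(0, 0);
  display.println("K66 + SSD1306");
  display.display();
}

void loop() {}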
 
Thanks, the other issue I ran into is that my display came with address 3C and the code is set up for 3D... Now the display is working.
 
Yes - it is 3D - all of mine were 3C as well - I wonder if that was the one on the 128x32 devices? Of course the silkscreen shows a 0x7? address.
The other simple changes to get it off of Wire are in the comments of this Adafruit_SSD1306/pull/1. Would be cool to make those edits in the PJRC source tree?
 
Ethernet shield testing on K66/K64 sticky post (I'll continue to update this post with new data...)

Running 1.6.9 with 1.29beta4 on Ubuntu (32-bit), lwIP 1.4.0
K66 beta PROTO6 board (and K64 beta, 8/31/16)
Ether shield with PHY LAN8720A, RJ45, two LEDs, 3.3V/GND, and 12 K66 pins (8 required): 3, 4, 24-28, 39, (16-19)
k66e.jpg

Raw ethernet tests:

Configuration summary:
Code:
F_CPU 120000000
192.168.1.17
enetbufferdesc_t size = 32
rx_ring size = 384
buffer size 1520
RX buffers 12
TX buffers 10
MDIO PHY ID2 (LAN8720A should be 0007): 7
MDIO PHY ID3 (LAN8720A should be C0F?): C0F1
PHY control reg 0x3100   100mbs, auto negotiate, full duplex
PHY status reg 0x7829
PHY reg 17 0x2
MPU_RGDAAC0 0x37DF7DF
SIM_SCGC2 0x1
SIM_SOPT2 0x3D10C0
ENET_PALR 0x4E9E500
ENET_PAUR 0x18808
ENET_EIR 0x0
ENET_EIMR 0x0
ENET_ECR 0xF0000112
ENET_MSCR 0x1E
ENET_MRBR 0x5F0
ENET_RCR 0x45F2D104
ENET_TCR 0x104
ENET_TACC 0x1
ENET_RACC 0x80
ENET_MMFR 0x60023100

Running extensions to Paul's original raw ethernet sketch: my etherraw.ino sketch is a monolithic menagerie of various low-level tests with hand-crafted packets.
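For reference, the hand-crafted frames are built around packed header structs along these lines (a simplified sketch; the field names are mine and not necessarily the ones in etherraw.ino):
Code:
// simplified wire-format headers for hand-built frames (multi-byte fields are big-endian on the wire)
struct enet_hdr {            // 14-byte ethernet header
  uint8_t  dst[6];
  uint8_t  src[6];
  uint16_t type;             // 0x0806 ARP, 0x0800 IPv4
} __attribute__((packed));

struct ip_hdr_raw {          // 20-byte IPv4 header, no options
  uint8_t  ver_ihl;          // 0x45
  uint8_t  tos;
  uint16_t length;           // header + payload
  uint16_t id;
  uint16_t frag;
  uint8_t  ttl;
  uint8_t  proto;            // 17 = UDP, 1 = ICMP
  uint16_t checksum;         // computed in software, see the checksum sketch below
  uint32_t srcip;
  uint32_t dstip;
} __attribute__((packed));

struct udp_hdr_raw {         // 8-byte UDP header
  uint16_t srcport;
  uint16_t dstport;
  uint16_t length;           // header + payload
  uint16_t checksum;         // 0 = no checksum
} __attribute__((packed));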
  • ARP request (broadcast), reply, and respond OK: (192.168.1.17) at 04:e9:e5:00:00:01, request/response time 115 us. Could build an ARP table.
  • ICMP/ping reply OK. Ping time from the linux host: rtt min/avg/max/mdev = 0.125/0.129/0.132/0.011 ms (prints disabled in sketch).
  • UDP receive test: the linux box sends 20 1000-byte UDP packets as fast as it can; the K66 receives all of them in 1620 us (about 98 mbs). The receiver clock starts when the first packet arrives.
  • UDP blast: the K66 sends 20 1000-byte packets (with sequence numbers) to a UDP sink program on linux; linux measured 96 mbs.
  • UDP echo reply: RTT avg 0.000142, min 0.000090, max 0.000483 seconds for an 8-byte pkt, initiated from the linux host. With a simple K66 sendto/recvfrom echoed back from linux: min 102, max 230, avg 119 us.
  • UDP NTP query using sendto/recvfrom (see the udp_ntp() listing below).
  • Total run power: no shield, just the K66@120MHz beta board, 50 ma; shield, no LEDs, not running? 71 ma; shield running (2 LEDs), 155 ma. The LAN8720A PHY spec is 100 ma, with a power-down command option (4+ ma).
    Power measured through a hacked USB cable. (Also see K64 ether power.)

    Turn off power to the PHY with
    mdio_write(0,0,0);      // auto negotiate off
    mdio_write(0,0,0x0800); // power down
    and total power drops to 56 ma (shield LEDs off).
  • PHY access seems to work with TA(2) or TA(0); the reference says TA(2).
  • PROMiscuous mode is set? It doesn't seem to work without it? FIX: the sketch was setting PAUR incorrectly; it should be ENET_PAUR = ((MACADDR2 << 16) & 0xFFFF0000) | 0x8808;, and then you can disable PROM in RCR.
  • Enable hardware checksum insertion in TACC and the TX descriptors; you MUST zero the outgoing packet's IP and UDP/TCP/ICMP checksum fields. ? First IP header checksum bad for UDP blast; pkts > 198 bytes have bad checksums (0) ?? TODO :confused:
  • Since the hardware checksum is flaky, added software checksums to the sketch; setting the UDP checksum field to 0 skips the calculation (a minimal version of the routine is sketched just after this list).
  • Enabled RX and TX interrupts, ENET_EIMR 0xA00000, just counting for now.
  • Modified output() to keep trying until a ring output buffer is available.
  • The etherraw sketch works with female headers on the K64 beta (Teensy 3.5), 8/31/16.
Code:
void udp_ntp(int reps, int ms) {
  int i, sport, t;
  uint32_t  secs;
  IPAddress sender;
  uint8_t buff[48] __attribute__ ((aligned(4)));

  UDP_lth=0;
  for (i=0;i<reps; i++) {
    buff[0] = 0x1b;   // ntp query
    sendto(buff,sizeof(buff),4444,manitou,123);
    while(UDP_lth==0) check_rx();  // poll ether ring
    recvfrom(buff,sizeof(buff), 4444, &sender, &sport);
    secs = *(uint32_t *) (buff+40);
    Serial.print("ntp "); Serial.println(swap4(secs));
    t=millis();
    while(millis() -t < ms) check_rx();   // active delay
  }
}
  • To simulate TCP performance, a UDP transmit function was configured like lwIP (MSS=1460, max window 2*MSS) and uses a "slow start" of a full window of data, sending 1460-byte packets. The window size and RTT latency determine TCP bulk transfer rates. With a simple linux UDP "ack" program (no delayed ACK), our K66 TCP-like transfer rate was 58 million bits/second (mbs) on the home wired Ethernet. Since the K66 hardware checksums are flaky, we calculate the IP header checksum in software and use 0 for the UDP checksum.

    We include a micros() timestamp in each packet to measure RTT.
    RTT stats: 1000 pkts min 349.000000 max 6142.000000 avrg 408.313000 (microseconds)
    So without the slow-start blast, the data rate would have been 8*1460/349 = 33 mbs. Increasing the window to 4*MSS increased the throughput from 58 to 85 mbs. The RTT jitter is caused by other traffic (broadcasts) on the home net, or by the linux host having something better to do.
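The software checksum mentioned above is just the standard 16-bit ones'-complement sum over the header. A minimal version of that kind of routine (my simplified rewrite, not the exact code in etherraw.ino):
Code:
// RFC 1071 style checksum. Zero the packet's checksum field before calling;
// a UDP checksum field left at 0 means "no checksum" and skips this entirely.
uint16_t ip_checksum(const uint8_t *data, int len) {
  uint32_t sum = 0;
  while (len > 1) {                            // sum 16-bit big-endian words
    sum += (uint32_t)((data[0] << 8) | data[1]);
    data += 2;
    len -= 2;
  }
  if (len) sum += (uint32_t)(data[0] << 8);    // odd trailing byte
  while (sum >> 16)                            // fold carries back in
    sum = (sum & 0xFFFF) + (sum >> 16);
  return (uint16_t)~sum;                       // ones'-complement of the sum
}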

Benchmarks: the table below compares lwIP on mbed with K66 to date.

Code:
                   K66@120mhz    K66   mbed K64@120mhz   mbed LPC1768@96mhz
                    raw ether   lwIP     lwIP+RTOS         lwIP+RTOS
UDP latency(us)       142        183        288               292
UDP send (mbs)         96         85         52                40
UDP recv (mbs)         98         67          4                 2

TCP send (mbs)         58*        59         26                25
TCP recv (mbs)                    51         21                19

- UDP latency: RTT for an 8-byte payload
- UDP send: blast 20 1000-byte packets, rate measured at the receiver
- UDP recv: rate-limited linux sends until the MCU receives 20 1000-byte pkts, no losses
- the poor lwIP-RTOS UDP recv rate is caused by buffer and thread management; the lwIP-RTOS UDP can receive 7 1000-byte packets at wire speed
- UDP blast of 1000 8-byte packets: 66534 pps

- mbed lwIP uses MSS 1460 and a TCP window of 2*MSS

* the TCP send for raw ether uses the TCP-over-UDP described above
(using lwIP UDP on the mbed K64F, faux TCP-over-UDP gets 41 mbs, min RTT 549 us)

The etherraw sketch is only a proof-of-concept, providing insights for integrating the K66 ethernet with lwIP. The sketch uses 43K of flash and 39K of RAM. One could develop a raw Ethernet API to do UDP by adding proper ARP management, transmit packet construction, receive buffer management, multiple streams, and handling of gateway forwarding.

The proto beta K66 does not have a unique MAC address in ROM, so the MAC address is hardwired in the sketch. Like the other Teensy 3's, the production K66 should provide a unique MAC address from ROM (beta3 and later).
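For the record, hard-wiring the MAC into the ENET address registers looks roughly like this (MACADDR2 and the PAUR line come from the fix noted above; MACADDR1 is just my label for the first four bytes):
Code:
// beta boards have no MAC in ROM, so hard-wire 04:e9:e5:00:00:01
#define MACADDR1 0x04E9E500                              // first 4 bytes of the MAC
#define MACADDR2 0x0001                                  // last 2 bytes of the MAC

ENET_PALR = MACADDR1;                                    // matches ENET_PALR 0x4E9E500 in the dump
ENET_PAUR = ((MACADDR2 << 16) & 0xFFFF0000) | 0x8808;    // low 16 bits must stay 0x8808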


lwIP tests (no RTOS, NO_SYS=1):

Working with lwIP 1.4.0 and a Makefile with the teensy3 core, I have developed some TCP/UDP examples; see
https://github.com/manitou48/teensy3/tree/master/k66lwip
The raw API (no RTOS) requires polling the Ethernet hardware and using callbacks. I am not sure how to integrate the library into the IDE. There are lots of lwIP tuning options. Particularly for TCP, a lot of work is required to manage timers, buffers, and packet arrivals. One can appreciate the advantages of network co-processors like WIZnet, WINC1500, and ESP8266.
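With NO_SYS=1 the main loop just has to keep draining the hardware and servicing lwIP's timers; the skeleton is roughly this shape (a simplified sketch, not the exact code in my examples):
Code:
#include "lwip/init.h"
#include "lwip/timers.h"   // sys_check_timeouts() in lwIP 1.4.x

void loop() {
  ether_poll();            // drain the ENET RX ring and hand frames to lwIP
  sys_check_timeouts();    // run lwIP's ARP/TCP/DHCP timers (NO_SYS=1)
  // application work goes here -- but don't call ether_poll() from inside a callback
}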

The memory usage for the test sketch is 52KB of Flash and 40KB of RAM. Included in the RAM is 34KB for the Ethernet ring DMA buffers. (For comparison, memory usage for a similar mbed K64F program lwIP+RTOS is 58KB Flash and 55KB RAM.) Packet buffers are allocated/freed from the heap (malloc()), and additional stack RAM is consumed for automatic variables. memcpy() is used to move between ring buffers and packets. The mbed K64F RTOS lwIP uses zero-copy, but that requires additional house-keeping to reclaim transmit buffers.

  • K66 lwIP recognized an ARP request and replied; the K66 issued an ARP request and handled the reply. Using a static IP address.
  • ICMP reply (ping) working, rtt min/avg/max/mdev = 0.127/0.135/0.231/0.021 ms
    ICMP port unreachable OK
  • telnet to the K66 is properly rejected with a TCP reset packet
  • the K66 will forward traffic through the gateway
  • UDP echo 8-byte RTT = 183 us, 20x1000 recv = 67 mbs, UDP send blast 20x1000 = 85 mbs, BUT I had to do a UDP echo first to establish ARP for the target ? :confused: (see table above). UDP NTP query OK, code snippet below:
Code:
     ether_init("192.168.1.23","255.255.255.0","192.168.1.1");
   ...

void ntp_callback(void * arg, struct udp_pcb * upcb, struct pbuf * p, struct ip_addr * addr, u16_t port) 
{
	if (p == NULL) return;
	if (p->tot_len == 48) {
		uint32_t secs = ((uint32_t *) p->payload)[10]; // NTP secs
		Serial.println(swap4(secs));
	}
	pbuf_free(p);
}

void udp_ntp(int pkts) {
    int i;
    struct udp_pcb *pcb;
    pbuf *p;
    uint32_t ms;
    ip_addr_t server;

    inet_aton("192.168.1.4", &server);
    pcb = udp_new();
    udp_bind(pcb, IP_ADDR_ANY, 4444);    // local port
    udp_recv(pcb,ntp_callback,NULL /* *arg */);    // do once?
    for(i=0; i<pkts; i++) {
        p = pbuf_alloc(PBUF_TRANSPORT, 48, PBUF_RAM);  // need each time?
        *(uint8_t *)p->payload = 0x1b;    // NTP query
        udp_sendto(pcb,p,&server,123);
        pbuf_free(p);
        ms=millis();  // ether delay
        while(millis()-ms < 5000) ether_poll();
    }
    pbuf_free(p);
    udp_remove(pcb);
}
  • TCP client and server working. TCP recv rate 51 mbs, but the K66 lwIP TCP send rate was less than 1 mbs, so some tuning was required (buffer management?). The sending packet's instantaneous data rate had some pauses (500 ms) with peak rates of 24 mbs. See graphs in post #875. Tuning fix: leaving the TCP fast timer at 250 ms, increasing the TCP window to 4*MSS, and adding tcp_output(), the TCP send rate is 59 mbs. With the larger window, the TCP receive rate increases to 81 mbs. See the table above and the lwipopts sketch at the end of this list.
  • DHCP enabled and tested OK
  • tested OK with 1.6.11 and 1.30beta3, 8/24/16
  • tried breadboarding/jumpering the shield to a beta3 board, failed. Breadboard + jumpers not suitable for 50MHz RMII? :confused: 6/29/16. defragster reports beta3 + female headers + shield OK
  • lwIP works with female headers on the K64 beta; change the Makefile to build for K64, 8/31/16
  • echosrv.ino, a TCP and UDP echo server, works. UDP works, but TCP hangs when the connect does not come within 20 s?
  • websrv.ino works and turns the LED on/off, 9/15/16
  • tcpecho_raw.ino, derived from someone else's sketch, works (hack the SYN_RCVD timeout to avoid the 20 s connect problem)
  • testing lwIP 1.4.1, 9/16/16
  • make also works on macOS (change the teensy3 and tools symbolic links); make failed on windows/cygwin
  • lwIP multicast tests with the sketch src/mtalk.ino showed it could transmit and receive multicast. Use the mbed driver's code to set the GAUR/GALR registers for the CRC/hash of the multicast MAC/group.
  • Using stepl's ether_lwip.zip in the IDE and modifying boards.txt with
    teensy35.build.flags.common=-g -Wall -ffunction-sections -fdata-sections -nostdlib -I/myhome/sketchbook/libraries/lwip/src/include
    I was able to build an lwIP 2.0.2 sketch in the IDE and tested httpd, ftpd, and tftpd with SdFat on uSD (8/10/17). A binary-mode fetch from uSD of SDTEST4.WAV (17173152 bytes) took 184.6 s with ftp (TCP) and 8 s with tftp (UDP), but only 3.7 s with the browser or wget http://192.168.1.19/SDTEST4.WAV (37 mbs). FIX: in lwip_ftp.cpp add tcp_nagle_disable(pcb); in ftpd_dataconnected(); ftp then takes 2.38 s (58 mbs). Also tested apps/sntp, apps/iperf, DHCP, and DNS. Below is the current draw of a T3.5 with the ethernet shield from power-up through a 6.5 s TCP transmit (6.6 seconds total), sampled every 100 ms. With no shield, an idle T3.5@120MHz consumes about 58 ma.
    t35ether.png
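The TCP tuning above boils down to a few lwipopts.h settings, roughly like this (illustrative values matching the notes above, not a verified copy of my config):
Code:
/* lwipopts.h excerpts -- illustrative, matching the tuning notes above */
#define NO_SYS              1               /* raw API, no RTOS */
#define TCP_MSS             1460
#define TCP_WND             (4 * TCP_MSS)   /* larger window: ~59 mbs send, ~81 mbs recv */
#define TCP_SND_BUF         (4 * TCP_MSS)
#define TCP_TMR_INTERVAL    250             /* TCP fast timer left at 250 ms */
#define LWIP_DHCP           1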

lwIP TODO:
  • lwIP tuning options: memory/pbuf's, LITE options, memcpy, checksum (integrate ether hardware checksum, if working)
  • if needed, enable/test: IP frag
  • DEBUG and stats_display() want to use printf
  • integrate yield() with ethernet polling
  • mbed uses zero-copy (need to reclaim TX buffers)
  • integrate multicast CRC/hash into lwIP API
  • integrate lwIP into IDE and add a little "class"
    Maybe someone more capable can figure out how to build lwIP in the IDE. My current thought is to use make to create liblwip.a, copy that into hardware/tools/arm/arm-none-eabi/lib/, and add -llwip to k66 build options in boards.txt. Then the IDE library would just need the lwIP include files. Update: see IDE with lwIP 2.0.2
    Paul would like to have the shield API be a drop-in replacement for the existing Arduino Ethernet interface (maybe based on UIPEthernet?).

Unresolved: ?:confused:
  • ethernet hardware checksums -- OK with stepl's lwIP 2.0.2
  • lwIP TCP server will hang if not connected to in 20s? -- OK with lwIP 2.0.2
  • shouldn't call ether_poll() in a callback, a serialization violation (oops, I do it in some sketches)
  • develop an appropriate API

github sources:
References:
K64 with ethershield tmpk64.jpg
 
UHS 3.0 on Teensy 3.5 with the Host Shield has tested as OK. This means most other things likely work as well. Next will be native USB, after more testing.
I'll push to github what has been done so far.
 
> Yes - it is 3D - all of mine were 3C as well - I wonder if that was the one on the 128x32 devices? Of course the silkscreen shows a 0x7? address.
> The other simple changes to get it off of Wire are in the comments of this Adafruit_SSD1306/pull/1. Would be cool to make those edits in the PJRC source tree?
The number depends on whose I2C device-numbering scheme you follow: these can be solder-jumpered to 0x7A, which gives 0x7A>>1 = 0x3D, or to 0x78, which gives 0x78>>1 = 0x3C.

Also, if doing a pull request, it might be nice to add in call(s) to Wire.setClock(400000L); Not sure if you should just do this after the Wire.begin or if you should do it each time you call beginTransmission...

Probably like the RA8875.cpp code does:
Code:
		#if defined(__SAM3X8E__)
			// Force 400 KHz I2C, rawr! (Uses pins 20, 21 for SDA, SCL)
			TWI1->TWI_CWGR = 0;
			TWI1->TWI_CWGR = ((VARIANT_MCK / (2 * 400000)) - 4) * 0x101;
		#else
			#if ARDUINO >= 157
				Wire.setClock(400000UL); // Set I2C frequency to 400kHz
			#else
				TWBR = ((F_CPU / 400000UL) - 16) / 2; // Set I2C frequency to 400kHz
			#endif
		#endif
And this is the only time it calls setClock. But sorry, this is probably outside the scope of this thread.
 
UHS3 update...
As expected, storage works as well.
There is a stability issue that can occur with SPI if your wires are too long. While most of us know this, I'm adding it as a note for anyone who may want to try and has failures.
UHS30 "auto-ranges" the SPI speed. I've not checked, but I think it is running at the maximum of 25 MHz.
This is fine for short bursts on CDC-ACM, but when you use storage, problems occur.

In any case, SPI is the bottleneck here more than anything else.

Here are the results, if anyone cares.
This test was done with my worst-performing USB thumb drive, which, for some odd reason, pretty much stinks on read rates even on a PC, go figure.

Code:
Start.


SWI_IRQ_NUM 30


USB HOST READY.
USB HOST state 1d
No media. Waiting to mount /
USB HOST state 02
USB HOST state 0a
USB HOST state 03
USB HOST state 0c
USB HOST state 0d
USB HOST state 60
/ mounted.
Removing '/HeLlO.tXt' file... completed with 4

Starting Write test...
File opened OK, fd = 1
Wrote 19 bytes, File closed result = 0.

Starting Read test...
File opened OK, fd = 1, displaying contents...
]-[ello \/\/orld!

Read completed, last read result = -1 (20), file close result = 0.
Testing rename
file rename result = 0.

Removing '/1MB.bin' file... completed with 0
1MB write timing test  2048 writes, (0), (0),  2282 ms (2 sec)
completed with 0
1MB read timing test 2048 reads, (20),  2735 ms (3 sec)
completed with 0
Directory of '/'
-rw--a      1048576 2016-07-08 23:27:40 1MB.bin
-rw--a           19 2016-07-08 23:27:36 newtest.txt
-rw--a           53 2016-04-25 08:32:22 TABTEST.TXT (tabtest.txt)
-rw--a 531044984 2015-00-15 02:00:06 ASTUDI~1.EXE (AStudio6_2sp1_1502.exe)
-rw--a        38936 2015-01-16 04:04:52 orn.ps
-rw--a        92576 2015-05-26 06:01:26 switch.jpg
-rw--a        66180 2015-05-26 06:01:38 regular.jpg
508592128 bytes available on disk.

Flushing caches...
Remove and insert media...
 
Paul: As you get comfortable exposing more committed details, there are probably things that could help summarize the lessons learned from groups 1 and 2 as the next larger group goes out.

I got as far as: perhaps a quick and dirty printable pin reference table (like the one I prematurely posted) - maybe you have one in the works. I think I summarized the current list to a CSV file that I could import/export as a PDF/Excel sheet and colorize (working from a solid reference might keep the smoke in things and certainly aids in placing wires and testing more efficiently). The other might be a single new K66 thread (semi-wiki) where beta recipients could use one post each to summarize details, github links, or usage details of their progress. Basically a condensed summary of stuff already in this 28-page thread, but it would be a shorter read with up-to-date info in one place, and it could easily be indexed in one post by subject area.
 
@Paul,

I think it's a good idea to dig into that SD problem (writing at >120 MHz leads to FAT corruption) - at least for me.
I wasn't able to identify the problem so far.
I decreased the SD speed to 10 MHz - that does not help.
 
Talkie may be a bit scratchy (haven't hooked up a T3.2 to compare)? The Suzanne Vega Diner demo is clearly not going well (with the first PJRC version of Talkie or as shipping). Talkie plays almost as well or better at 16 MHz as at 240 MHz now; some clock changes may have altered that? 16 MHz, it seems, was noted as bad last night, where pitch/speed was off - same now.

Could you please test Talkie from here? (Output is now the DAC0 pin, as stated in the readme.) Is it better?: https://github.com/FrankBoesing/Talkie/tree/patch-1
 
I just tested with T3.0, T3.2 and T3.5 (180 MHz) and it works; there are no big changes from p7, just a couple of bugs fixed and the K64/66 identifiers. ...

K66 with TFT_ILI9163C v1.0.8 works great at 240 MHz - on the Somebars and Benchmark samples as written, on SPI0. A new crimper put the pins nicely onto the wired socket - unique wire colors got it properly connected.

Swapped the display to ILI9341_t3 and it is running Benchmark fine as well at 240 MHz.
 
@Paul,

I think it's a good idea to dig into that SD problem (writing at >120 MHz leads to FAT corruption) - at least for me.
I wasn't able to identify the problem so far.
I decreased the SD speed to 10 MHz - that does not help.

Let's hope that it is not a strange hardware problem..

- it did not help to insert delays
- it did not help to reduce the speed
- it did not help to switch to SDHC_TRANSFERTYPE_SWPOLL instead of DMA

BUT there are a lot of warnings.. anyway, I suspect they have nothing to do with it, since it works fine for <=120 MHz.

Edit: The capacitor between the socket and the chip - is it connected to the socket, or to the chip only?
 
...hm... will it cause power problems to use SD AND the 2nd USB AND Ethernet? (...AND the audio shield?)
 