Constructive feedback for FIRST on the next generation of hardware


  • MikeRush
    replied
    FYI, per Part 1 of this year's manual, the Samsung Galaxy S5 will no longer be allowed.



  • HumanJHawkins
    replied
    In case this thread is still alive, I'll add my 2 cents.

    We had a significant issue related to the durability of phone and hub USB ports. It would be a huge benefit to reduce the frequency with which teams need to plug and unplug devices... If we don't get rid of the phones, as some suggest, then switching to something way more current than USB Micro on the Rev Expansion Hub would help.

    For example, if the hub had a Type-A female port, it would allow the use of a simple power Y-cable to enable charging without unplugging (e.g., https://www.amazon.com/gp/product/B074V361J6/). I have tested many alternatives, but only ever achieved a semi-reliable connection due to the number of adapters required to get between mini-USB and the micro-USB 3 port of a Galaxy S5.

    Speaking of the Galaxy S5, as they have not been manufactured for years, the market for them is disappearing. However, as they have a significantly better camera and other hardware than other FTC legal options, they have been the phone of choice for many teams. For 2019-2020 (or perhaps 2020-2021 if it has to wait), we really need an option in current production that has the camera quality and processing power of the S5 (or better).

    I agree with most of the other suggestions here. It is great to see this discussion. I look forward to seeing what improvements can be implemented for the next few seasons.



  • ftcterabytes
    replied
    I stumbled back into the forums after years of being away. This conversation caught my eye because of things I've seen with our team on the controls side. When I say controls, I'm specifically referring to the real-time dynamics of the software-machine interaction. I don't completely grasp all of the protocol details happening over WiFi that contribute to variable latency between driver stations and robot controllers.

    My experience mentoring a team that started on the Lego NXT + Samantha and later transitioned to the Android platform was: THIS IS GREAT! The first year with this change to Java was a huge improvement: a real language with a real compiler that really worked (as opposed to RobotC, which didn't always generate code for the things it could parse).

    That first year I was able to teach the programmer how to implement a closed-loop speed estimator. It ran on the Java side, using the ModernRobotics brushed DC motor controllers. In decompiling the SDK code, I could see strange things that would tend to de-regulate loop cycle time, and I was a little disappointed. Yet, looking back, that environment rocked. Video of that robot doing its speed control thing here.
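
    Roughly, the estimator idea looks like the sketch below. To be clear, this is not the team's actual code, just a minimal illustration using the FTC SDK's DcMotor and ElapsedTime classes; the gain and filter constant are placeholders to be tuned for the real mechanism.

    ```java
    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.util.ElapsedTime;

    // Minimal sketch of a closed-loop speed estimator/regulator meant to be
    // called once per OpMode loop(). Not the team's actual code.
    public class SpeedRegulator {
        private final DcMotor motor;
        private final ElapsedTime timer = new ElapsedTime();
        private int lastTicks;
        private double filteredSpeed;              // ticks per second, low-pass filtered
        private static final double ALPHA = 0.3;   // weight of the newest sample (placeholder)
        private static final double KP = 0.0005;   // correction gain (placeholder)

        public SpeedRegulator(DcMotor motor) {
            this.motor = motor;
            this.lastTicks = motor.getCurrentPosition();
            timer.reset();
        }

        /** Call once per loop() with the desired speed in encoder ticks/second. */
        public void update(double targetTicksPerSec) {
            double dt = timer.seconds();
            if (dt <= 0) return;
            timer.reset();

            int ticks = motor.getCurrentPosition();
            double rawSpeed = (ticks - lastTicks) / dt;
            lastTicks = ticks;

            // Low-pass filter the raw estimate; the per-loop deltas are noisy when
            // the encoder read isn't synchronized with the loop.
            filteredSpeed = ALPHA * rawSpeed + (1 - ALPHA) * filteredSpeed;

            // Nudge the motor power in proportion to the speed error.
            double power = motor.getPower() + KP * (targetTicksPerSec - filteredSpeed);
            motor.setPower(Math.max(-1.0, Math.min(1.0, power)));
        }
    }
    ```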

    The shock that hit the next year with the new SDK was that the speed estimator became wildly noisy. This control strategy no longer worked. Digging deeper (the team had logging code that recorded timestamps of every loop cycle), it was clear that the loop timing was still stable, but because communication with the motor controller was now decoupled from the loop() method calls, the two would jitter past each other. I believe this was the "noise" introduced in that SDK version that crippled the speed estimator.

    This year, on whatever the newest (or nearly newest) version of the SDK was, we saw some other things that were a little mysterious. As development continued through the season and software was added, the drivers reported that "lag" (really latency) was getting worse. I worked with the programmer to review the logs and do some experiments. Indeed, loop() cycle time was working out to be a painful 150 ms. One thing that seemed to be happening was that the fewer methods were called to set values on or retrieve values from the RevHub, the faster the loop times. It smelled like each added method call that read a sensor or encoder caused an additional transaction over USB with the RevHub.

    If that's how it works in there, it's just borrowing trouble. It would amount to a performance penalty for using sensors and other feedback, and that seems like building the wrong set of motivations for students learning how to use this stuff.
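
    If that is indeed the mechanism, the practical workaround is to read each device at most once per loop and pass the cached values around, instead of calling the hardware getters wherever a value happens to be needed. A minimal sketch (the class and field names are illustrative, not from any team's code):

    ```java
    import com.qualcomm.robotcore.hardware.DcMotor;

    // Cache hardware reads once per loop() so each loop pays for one
    // transaction per device instead of one per call site.
    public class SensorSnapshot {
        public final int liftTicks;
        public final int leftDriveTicks;
        public final int rightDriveTicks;

        private SensorSnapshot(int lift, int left, int right) {
            this.liftTicks = lift;
            this.leftDriveTicks = left;
            this.rightDriveTicks = right;
        }

        /** Read every device exactly once; everything downstream uses the copy. */
        public static SensorSnapshot take(DcMotor lift, DcMotor leftDrive, DcMotor rightDrive) {
            return new SensorSnapshot(
                    lift.getCurrentPosition(),
                    leftDrive.getCurrentPosition(),
                    rightDrive.getCurrentPosition());
        }
    }
    ```

    In loop(), take one snapshot at the top and hand it to each subsystem, rather than letting each subsystem read the hardware again on its own.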

    I am of the opinion, fairly strongly, that there's a whole lot that is right about the Android + RevHub combination. Cost, availability, size, and connector orientation are lots better than they were. (Although Yerko42 pointed out a problem I hadn't heard of before but which makes sense -- phones walking away.)

    I'm also of the opinion that there needs to be some input into the design of the SDK, USB comms architecture, and RevHub firmware from mentors who know how to architect control systems that don't cause software grief in the midst of trying to make the machine do what we want it to do.

    Also, 11343_Mentor, I hear what you're saying. Android + Java ain't the way I'd architect the whole control loop. On the other side of that, I think there's still a lot that can be done to make the hardware platform that's there do a whole lot better from a controls standpoint, at the kind of mechanical time constants we're dealing with for these teams.

    I'd love to see this aspect of the system get better (loop-time jitter, input-process-output synchronization), with examples supplied with the SDK of how to do some of the control tasks at a higher level. The reason I'd love this: teams could learn how to do control, intuitively, without these kinds of erratic contributions from the underlying software architecture.

    JoAnn, who's a good person at FIRST to discuss this with? (Yes, I know Worlds is upon you all. I'm not expecting anything until you all do it and recover.)



  • DanOelkeFTA
    replied
    My background: I've been working with FTC for a number of years, including a few with the previous control system. I have developed distributed embedded systems for decades and taught networking at the grad-school level, so I am definitely not a neophyte in this area. I realize, however, that this makes me blind to some of the problems that new people encounter.

    Each robot being its own network is not that bad *if* the number of robots is kept to a reasonable level. When we hit large tournaments (>48 robots), having everyone on one channel does lead to channel congestion. And even splitting to two channels doesn't always keep latency reasonable. We usually don't have 3 channels because one of the 3 usable ones is kept for things like the scorekeeping system. The way that WiFi handles and limits collisions works remarkably well with modern systems -- way better than what we had with the Samantha modules.

    Yes - if all the robots were connected to the same hub we would have better congestion handling. Everyone connecting to the hub would cause fewer collisions, and the number of robots would scale higher. The downside is that now you have to have a hub to do this and reconfigure all the robots and drivers' phones to use it. And when teams are back in the pits (which sometimes are pretty close to the playing field), they would be required to switch back and forth, or the pit practice could interfere with the competition. Getting people who can configure the hub is something that many regions struggled with. And then the teams struggled with the connections because it was different from how they were practicing. I think that while we see some problems with WiFi Direct, it is far better than when we did have a hub and central control system.

    So - I think that having a central hub could help - BUT - the benefit is only for large tournaments and the downside outweighs the benefit.
    I have gotten phones for the team I coach that can use 5 GHz as a way to avoid congestion at bigger tournaments.

    I didn't like Blocks or OnBot Java at first. But then I helped some new teams, and I completely changed my tune: Blocks is a great way to make it easier for teams to get started.

    It is a bit more complicated to set up, but I have added a WiFi dongle to programming laptops so they can stay connected to the local WiFi for web searches while using the laptop's built-in WiFi to connect to the phones. It makes programming much nicer than switching WiFi networks all the time (although the kids seem OK with that too).



  • ftcsachse0
    replied
    I will first admit I am just getting into this, having registered two teams to compete next year during early registration. But I believe this also gives me a useful perspective. Barrier to entry always needs to be considered, both economic and technical. It is easy to forget that simply asking today's youth to come up with something from a bunch of metal parts is a hurdle in itself. My experience is that none of them turned to Google without prompting. The system needs to be well documented, with links to where to find resources. (I can hear some saying that isn't germane, but nothing exists in a vacuum.) In fact, it may help to start by describing the experience and requirements, along with some options.

    Now, one concern of mine is the sheer number of robots that don't move at all. It was astonishing to me how often there was a complete failure to move. I have serious doubts that all of this can be explained by user error when the buzzer sounded. Some, yes, but not as many as I saw. The design needs to consider this from the start.

    I have a couple of concerns with the entire concept of each bot being on its own network. The first is spectrum limitations. In the 2.4 GHz band, only three channels fit without overlap. Four teams compete at a time, and at the single (seemingly small) tournament I went to, four more were getting ready while four competed, making at least eight active networks before counting any team tweaking their code or at the practice pit. While not an expert, my years of experience suggest they would be better off sharing a network, as that seems to get better throughput. It should be noted that the rules can't do anything about the hotspots that various spectators might create or carry in, or about the networks of the venues.

    My thing is that having multiple bots on an established network could have additional benefits, such as allowing a team to quickly search for something while staying connected to the bot they are trying to program. As it is, I can see that my youth will learn to change which network their laptop is connected to so they can jump back and forth. Fortunately there isn't a ton of chatter in the web interface to prevent that.

    I will say that I consider having the two programming interfaces available via web browser to be a strength and hope that doesn't go away. It lowers the barriers. If this were allowed on an existing network, it could be further enhanced with links to online reference material. Perhaps better would be a local copy with the ability to update itself, but every bell has a cost. While I like the idea of a bot with no supporting hotspot, the cost of a hotspot really isn't a barrier, and in my mind more is to be gained by putting the bot on an established network.



  • 11343_Mentor
    replied
    At the risk of sounding like a broken record, the fundamental problem of the overall architecture is trying to control a machine with a software environment that cannot reliably control anything. Again, relying on Java for robot control is really dumb, quite honestly. If I were designing this system, I would use a microcontroller running compiled C code for the control loops, and interface the UI with the current phone environment. I work daily with controls that run loops in the sub-µs range, with ns repeatability; we interface them to higher-level UI controls that operate in the millisecond domain. FTC should allow custom and third-party controllers. Our students would then learn how embedded controls are actually done in the real world, instead of the internet-app world that FTC is living in now.

    Industry needs more young people getting into embedded software and machine control. FTC is failing our younger people by leading them down the internet-app Java path.



  • FTC3550
    replied
    After doing some testing about a year ago, one of the issues with the latency and jitter in the communications is the USB Serial adapter that is being used inside the REV Hub and has been used with the Modern Robotics Devices. While I understand that this may have been a choice to push something out there at the beginning, if there are plans to continue using Android phones with USB, new hardware should have raw USB protocol capabilities. One of the reasons that there is significant jitter in the current system is the USB bulk endpoints that are being used for communication to the serial converter. Bulk endpoint transfers occur only if there is sufficient bandwidth available on the bus, and only after all other transactions on the bus have occurred. However, there are other endpoint types that provide better timing guarantees like the Interrupt and Isochronous endpoints. Although there are only a few devices (or in some cases 1 with the REV Hub) on the USB bus, using these types of endpoints may have better timing guarantees from the kernel as well, since they specify data that needs particular timing characteristics. Using standard USB packets would reduce the overhead of sending the current serial header and checksum data since this is already part of the USB packet, and allow usage of the full bandwidth of the device. More info about USB endpoint types can be found here.

    As a simple test, I set up an Android phone with a Teensy 3.5 using the raw HID interface. This type of interface sets up an interrupt endpoint in both directions, and I set the endpoint intervals to the minimum of 2 ms. The full packet size of 64 bytes was used for each message, but only one byte was used to toggle an output to test the timing with a square wave sent from a simple Android application. I could reliably get packets sent with almost no observable jitter at 20 ms per packet sent to the device (50 Hz). Although the device should theoretically be able to get close to 2 ms between packets, the jitter characteristics degraded significantly at shorter periods. Each packet was received correctly, but the time between packets was anywhere from 4 to 16 ms. Although using this type of endpoint could reduce some of the issues that teams are seeing with latency and jitter, it would not fix the issue, and it is still only able to update reliably at about 50 Hz. Isochronous transfers may have slightly better characteristics, but the results are likely to be similar.
    See tests and results here.
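
    For reference, the Android side of a test like that looks roughly like the sketch below, using the stock android.hardware.usb host API. Device discovery, USB permission handling, and the Teensy firmware are omitted; treat it as an outline of the experiment rather than the exact test code.

    ```java
    import android.hardware.usb.UsbConstants;
    import android.hardware.usb.UsbDevice;
    import android.hardware.usb.UsbDeviceConnection;
    import android.hardware.usb.UsbEndpoint;
    import android.hardware.usb.UsbInterface;
    import android.hardware.usb.UsbManager;
    import android.os.SystemClock;
    import android.util.Log;

    // Send a 64-byte raw-HID packet out an interrupt endpoint every 20 ms and
    // log the achieved period, to measure jitter at a 50 Hz send rate.
    public class HidTimingTest {
        public static void run(UsbManager manager, UsbDevice device) {
            UsbDeviceConnection conn = manager.openDevice(device);
            UsbInterface intf = device.getInterface(0);          // raw-HID interface
            conn.claimInterface(intf, true);

            UsbEndpoint out = null;
            for (int i = 0; i < intf.getEndpointCount(); i++) {
                UsbEndpoint ep = intf.getEndpoint(i);
                if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_INT
                        && ep.getDirection() == UsbConstants.USB_DIR_OUT) {
                    out = ep;                                    // interrupt OUT endpoint
                }
            }

            byte[] packet = new byte[64];                        // full raw-HID report
            long last = System.nanoTime();
            for (int n = 0; n < 500; n++) {
                packet[0] ^= 1;                                  // byte the firmware uses to toggle its pin
                int sent = conn.bulkTransfer(out, packet, packet.length, 10);
                long now = System.nanoTime();
                Log.i("HidTimingTest", "sent=" + sent + " bytes, dt=" + (now - last) / 1e6 + " ms");
                last = now;
                SystemClock.sleep(20);                           // target 50 Hz
            }
            conn.releaseInterface(intf);
            conn.close();
        }
    }
    ```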

    With the Android phones, there really isn't any other option for communication with the motor controllers and sensors. Even with the optimal USB endpoint types, the latency issues that teams are seeing with their hardware cannot be fixed without switching to an embedded processor. The Android operating system on the phones is also locked down, preventing tweaks to the kernel that could potentially improve some of these issues. With a proper embedded device, the operating system (pick your favorite) could be tuned for robotics applications, and updates to the operating system would remain in the control of the developers for tweaks and bugfixes that don't get fixed in some Android phones. This would also allow teams to use sensors more effectively, since they could be connected directly to the SOM instead of waiting for USB packets to be buffered and scheduled. This was the case with the old NXT-based system, where sensors could be polled reliably at above 100 Hz, while teams can now barely manage reading from a sensor in the 20-50 Hz range depending on what other devices are connected. Although there is more processing power with the new system (which is great), the bottleneck remains the IO, and it is worse than the old NXT system in this respect. Qualcomm processors and software can still be used, as ejschuh mentioned, if that is an issue.



  • Alec
    replied
    Hello Yerko42. Thank you for your input! I believe most FTC mentors have very little insight into what goes on in FRC (we are too busy trying to figure out what goes on in FTC ;). Your input helps us a great deal.

    I assumed that there must be some compelling reason that FTC cannot use the FRC control system. Perhaps FTC's partnership with Qualcomm requires that the FTC control system run primarily on Qualcomm processors (I'm just guessing here).



  • Yerko42
    replied
    I'd just like to throw a wrench in the mix here: FIRST already has some pretty awesome and reliable hardware in FRC (the NI RoboRio). I mentor 1 FRC team and 2 FTC teams. I dread dealing with the FTC control system. I agree with many of the other mentors on the forum that the control system is kind of ruining the function of FTC (inspiring kids). The FTC setup is a drag the entire season: it's hard to tell if your code is messed up, the hardware is glitching out, or why the phones won't connect. But with the FRC setup I can, and have, set up many rookie team members and left them with a basic understanding of the code and control system, and they thrive. It's consistent, reliable, and flexible.

    Looking through some of the comments in this thread and others, I'm worried. There are a lot of really smart people on here who are frustrated with the setup, myself included (except the smart part), and it seems like there are a lot of suggestions of discrete components and/or talk of repackaging products to be suitable for FTC. I really don't get why. Why try to turn FTC into a product development project? I think that is what got us into this mess in the first place. NI is in the business of building products like this; I don't think FIRST should be.

    Some of my arguments:
    1. The NI RoboRio is professional, or at least professional enough. As a super-green mentor I walked into a mess of panicked students and teachers halfway into a rookie FRC build season; I had no idea what was going on, but recognizing NI equipment that I had used at work gave me confidence that I could get these kids up and running, and I did. Mirroring technology used in industry matters. It makes it easier to find mentors, and being able to relate tasks on the robot to tasks at work makes it relevant and real for the kids. Sure, there are people out there embedding dev boards into products, but there are a lot more electricians, lab techs, instrument techs, DCS techs/engineers, etc.
    2. Cost: 1 RoboRio, 1 mesh radio, and 8 Spark Mini motor controllers are $810; 2 Expansion Hubs and phones are $500. Yep, the RoboRio is more expensive... but at the moment our REV system is a brick unless we're outside of the school. I'm thinking it is the school's air marshal, but I am having a really hard time getting the IT folks to even talk about the prospect of them having air marshal. If you're an industry mentor like myself, you have to take time off work to put your face in someone's office to fix stuff like this. That extra $310 seems awfully worth it right now... I'll say $330, since you would need some power distribution and could use this ( https://www.digikey.com/product-deta...005-ND/2051177 )
    3. We've never had a RoboRio "walk," but we have had the phones disappear. Especially as FIRST pushes into its target demographics, please get rid of the phones. Some kids are in pretty crappy situations, and a phone is a super powerful tool/escape and thereby highly likely to get stolen. The RoboRio is pretty unexciting at home.
    4. The footprint is the same as or smaller than the phone-and-two-hubs setup. If the Control Hub comes online it might be smaller than the RoboRio-and-radio setup. Again, I think it is worth it for the reliability.
    5. @ejschuh gets to connect via Ethernet with a clip; you can upload code via USB or Ethernet, or even test tethered without fussing with WiFi at all. That's the kind of troubleshooting you get to see in the real world.
    6. Rookies can get their bot running in minutes and @Alec gets to dive into the world of CAN and tape all of the rest of his ports closed.
    7. No XT30s built into the board. I think they are great for RC planes, but I've had to replace at least one for a team at every competition we've been at since the middle of last season. I think this may be the one way the MR system was superior.
    8. Opens the possibility of Field Management System in FTC.
    9. Allows teams to build their own driver stations or use whatever controller they want.
    10. We've run unofficial events with something like 40 teams running with their FRC radios on, with no problem.
    11. The WPI robotics Library is awesome
    12. You can run C++, Java, or LabVIEW. Or Python if you want.
    13. Consolidating hardware will make it easier for FRC teams to mentor FTC teams. And it will make the transition to FRC easier for programs that have the entire FIRST pipeline.
    14. 5V sensors!!! This 3.3V business really burns me out.
    15. I can get a rookie coach/mentor up to speed and comfortable enough with the system to guide students on improving it. The current setup invites more frustrated sighs, head scratching, and hands in the air than constructive teaching and learning.
    I'm sure there is more, but I'm going to head into the kitchen and help make some x-mas empanadas. If any of you have thoughts or holes to blow in my arguments, they would be much appreciated. Or, if you're interested in steaming forward on the NI train with me and convincing FTC to at least make it legal, then maybe we can work together to iron out any kinks. My team could prototype an FTC setup with a RoboRio if there is interest.

    Goodnight Friends



  • Alec
    replied
    I came across some tutorials on the CAN bus layer-one and layer-two specifications and protocol. I am absolutely astounded at the simplicity, elegance, effectiveness, and overall sheer genius of the CAN bus architecture.

    I've come to realize that it is essential for roboticists and technologists to have a basic understanding of CAN bus architecture. I like this particular tutorial, and there are other great tutorials.
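
    As a taste of that elegance, the toy model below (plain Java, no real CAN hardware) shows the bitwise arbitration rule: every node shifts its 11-bit ID onto the bus MSB first, a dominant 0 overrides a recessive 1 on the wire, and a node that reads back a 0 while sending a 1 drops out. The lowest ID therefore always wins, and no frame is ever corrupted by the contention.

    ```java
    import java.util.Arrays;

    // Toy illustration of CAN bus arbitration: the lowest 11-bit ID wins.
    public class CanArbitrationDemo {
        static int arbitrate(int[] ids) {
            boolean[] active = new boolean[ids.length];
            Arrays.fill(active, true);
            for (int bit = 10; bit >= 0; bit--) {            // MSB first
                int busLevel = 1;                            // recessive unless someone drives 0
                for (int i = 0; i < ids.length; i++) {
                    if (active[i] && ((ids[i] >> bit) & 1) == 0) busLevel = 0;
                }
                for (int i = 0; i < ids.length; i++) {
                    // Sending recessive (1) but seeing dominant (0) means you lost arbitration.
                    if (active[i] && ((ids[i] >> bit) & 1) == 1 && busLevel == 0) active[i] = false;
                }
            }
            for (int i = 0; i < ids.length; i++) {
                if (active[i]) return ids[i];
            }
            return -1;
        }

        public static void main(String[] args) {
            // A motor command (0x101) beats telemetry (0x301): lower ID = higher priority.
            System.out.println(Integer.toHexString(arbitrate(new int[] {0x301, 0x101, 0x200})));
        }
    }
    ```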



  • Alec
    replied
    Originally posted by 11343_Mentor View Post
    Alec, all good points, but let me throw in some other perspectives. You use the term real-time at several points in the above discussion. This is not true, since the architectures you describe cannot be real-time, as they are based on Java. They may be fast enough to get done most of what is needed for robot control, but ultimately using the wrong tool for the job is not teaching the upcoming generation of design engineers how things should be done. Thus, I take issue with your statement above, as they are not learning standard control system architectures. As I have indicated before, leveraging the Java ecosystem is fine, e.g., extensive GUI libraries, as well as existing high-level communication and control. Where Java falls down is in direct motor control, as Java is not ultimately predictable. A better architecture would be to have a microcontroller running compiled C code performing all the real-time control tasks, interfaced with the SOM performing the high-level tasks.
    Thanks for the feedback. We are in agreement. We just need to clear up some misunderstandings...

    By "architecture" I was referring to the Roboteq control network architecture. To me, the Roboteq control network looks like it would be a standard architecture for motion control.

    You can throw everything away and start from scratch on the Roboteq control network. Alternatively, you can slowly migrate some or all of the Robot Controller/SDK over to the Roboteq control network and/or to other hardware, starting with the pieces that need to run on a realtime stack.

    With the Robot Controller/SDK running on the DragonBoard (or other SBC/SOM) instead of a phone, you have the option of running the Robot Controller/SDK on a realtime variant of Android (say SBC/RTAndroid). You also have the option of running a realtime variant of Linux on a SBC (say SBC/RTLinux), but you would have to port the Robot Controller/SDK to Linux.

    So you would have parts of the OpMode/SDK running on a realtime OS on a SBC, and parts running on the Roboteq control network. You can keep the Driver Station as-is, or decide to refine the DS as well.

    At some point you may come to realize that it is not necessary nor advisable to migrate the Robot Controller/SDK completely off of SBC/RTAndroid or SBC/RTLinux. You may even find that you need not change today's SDK very much at all, and find that you just need to migrate the realtime pieces of your OpMode to the Roboteq control network as I had mentioned:

    Originally posted by Alec View Post
    ... Once support for a Roboteq control network is added to the FTC SDK (or added to your OpMode), the real fun can begin: you can create and load simple or complex subroutines onto the Roboteq control network to run distributively/in parallel in realtime on the control network!!!
    I think that even if you eventually find that you need to migrate completely off of phones and SBCs, migrating is a better option than starting from scratch on other hardware.

    The important thing is that the Roboteq control network will allow students to get hands-on experience with, and exposure to, the types of control system architectures they will be facing in the real world. And, as a bonus, students and mentors will get to spend much more time on STEM education if they are not having to deal with the issues of the current architecture and hardware.
    Last edited by Alec; 12-08-2018, 11:58 AM.



  • 11343_Mentor
    replied
    Alec, all good points, but let me throw in some other perspectives. You use the term real-time at several points in the above discussion. This is not true, since the architectures you describe cannot be real-time, as they are based on Java. They may be fast enough to get done most of what is needed for robot control, but ultimately using the wrong tool for the job is not teaching the upcoming generation of design engineers how things should be done. Thus, I take issue with your statement above, as they are not learning standard control system architectures. As I have indicated before, leveraging the Java ecosystem is fine, e.g., extensive GUI libraries, as well as existing high-level communication and control. Where Java falls down is in direct motor control, as Java is not ultimately predictable. A better architecture would be to have a microcontroller running compiled C code performing all the real-time control tasks, interfaced with the SOM performing the high-level tasks.



  • Alec
    replied
    Originally posted by Alec View Post
    ... Roboteq controllers require minimum 10V DC supply. For a decent margin, you will probably want to supply 18 or 24V to the controllers. You can use two Tetrix batteries in series to supply 24V to the Roboteq controllers. BTW, FTC batteries are long overdue for an upgrade. I bet a lot of the failures at competitions are caused by the 3Ah Tetrix battery, which I feel is inadequate given the increases in loads that have been allowed over the past few seasons, and given the flakiness of the current electronics offerings from FTC.

    As for pricing, a 2 channel Roboteq controller lists for $250 USD. Considering the features, time savings, customer support, and educational value to students, a $250 price point for a 2 channel controller is highly reasonable. Roboteq will probably be inclined to offer their controllers to FTC teams for a lot less, to support the FIRST cause and/or for volume.
    Also, to lower the price points further, Roboteq could produce FTC specific models of their controllers with lower power specifications. A four channel model for FTC at $250 and a two channel model at $150 would be quite reasonable considering the MR and Rev price points.

    Roboteq need not replace MR and Rev. Roboteq would give teams an alternative that might better suit a team's needs and interests.



  • Alec
    replied
    ejschuh, I appreciate all the thought you've put into this, and all the time and professional services you are offering to FIRST. I strongly believe the professional services you are offering to provide on a volunteer basis are just as valuable and appreciated as the time and services of any other FIRST volunteer. Your services have the potential to dramatically reduce the workload and frustrations of thousands of other volunteers and students, which in turn will give mentors much more time to teach, and students much more time to learn. Not to mention that a flawed and flaky control system works against the mission of inspiring kids to pursue STEM education.

    I think we need to do a bit of market research before undertaking the design of the next-generation control system. Roboteq produces SOMs ("controllers") that can control brushed DC motors at 20+ amps per channel, and offers models having one, two, or three channels.

    Roboteq controllers are designed to be meshed together on a CAN Bus network. Thus if you have four drive motors you can mesh together 2 x two channel controllers (SDC2160) to create a single 4 channel control network (making it feasible to implement effective PID control of holonomic drive systems). To control 7 motors, you can mesh together 2 x three channel controllers (SDC3260) plus 1 x one channel controller (SDC2160S) to create a single 7 channel control network. Servo, sensor, and general I/O interfaces are built into the Roboteq controllers.

    A Roboteq controller is a standalone robotics computer running a realtime controller OS designed to mesh together with any number of controller nodes on a CAN Bus network.

    By creating a mesh of controllers on a CAN Bus network you can scale the processing power of the control system while scaling the number of motors, servos and sensors under realtime control and/or query.

    If additional servo, sensor, or I/O interfaces are needed, they can be added to the system by adding Robot IO eXtenders to the CAN Bus network. The eXtender controller is also a standalone computer that further scales the processing power of the control system.

    Typically you will want to minimize the number of controllers, so you will mesh together a small number of larger controllers. Or you can opt to mesh together a larger number of smaller controllers, so as to increase the overall processing power and/or parallelism of the control system.

    You design your control program to run distributively on the control network. There is a visual designer which you can use to design and simulate your control program on a PC. After your control program is verified under simulation, it can be compiled into bytecode that is loaded onto the controllers on the control network.

    Out-of-the-box, the controllers are configured to act as slaves under the control of a control program running on a Robot Controller computer, such as a PC, smartphone, Raspberry Pi, DragonBoard, etc.

    The Roboteq controllers have more features than you could ever imagine, including conformance to ESD protection and protection verification standards. Check out the Datasheet and the User Manual.

    Apparently there are open standards for motion control (PID control). Roboteq's controllers implement the "CiA DS402" open standard for motion control.

    A DragonBoard 410c running Android OS + the FTC Robot Controller can connect to a Roboteq control network either via a CAN Bus interface (mezzanine required) or via the DragonBoard's built-in UART interface. A single UART connection to any controller on a Roboteq control network is all that is needed. The controller that is connected to the DragonBoard via UART acts as a gateway to the other controllers on the CAN Bus network.

    So pretty much all you have to do to enable the FTC Robot Controller to control a Roboteq control network is create classes that wrap control and query commands that are sent & received over the UART connection to the Roboteq control network. Roboteq will probably even do this work for FIRST.

    Speaking of customer support, for larger orders Roboteq will customize any aspect of their controllers, e.g., processor, firmware, packaging, connectors, power ratings, etc., to suit a customer's particular requirements.

    Note that a team can create the wrapper classes in their OpModes without having to wait for FIRST or Roboteq to provide wrapper classes in the FTC SDK.
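
    As a rough illustration of what such a wrapper might look like, here is a sketch. The SerialLink interface is hypothetical (it stands in for whatever UART transport the Robot Controller ends up exposing), and the "!G" / "?C" command strings follow Roboteq's ASCII serial protocol as I understand it; verify them against the Roboteq user manual before relying on this.

    ```java
    // Hypothetical wrapper around a Roboteq gateway controller reached over UART.
    public class RoboteqGateway {
        /** Hypothetical line-oriented transport to the gateway controller's UART. */
        public interface SerialLink {
            void writeLine(String line);
            String readLine();
        }

        private final SerialLink link;

        public RoboteqGateway(SerialLink link) {
            this.link = link;
        }

        /** Set motor channel (1..n) to a power in the range -1.0 to 1.0. */
        public void setPower(int channel, double power) {
            int command = (int) Math.round(power * 1000);    // Roboteq !G takes -1000..1000
            link.writeLine("!G " + channel + " " + command);
        }

        /** Query the encoder count for a channel; returns 0 on a malformed reply. */
        public long getEncoderCount(int channel) {
            link.writeLine("?C " + channel);
            String reply = link.readLine();                  // expected form: "C=12345"
            int eq = reply.indexOf('=');
            if (eq < 0) return 0;
            try {
                return Long.parseLong(reply.substring(eq + 1).trim());
            } catch (NumberFormatException e) {
                return 0;
            }
        }
    }
    ```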

    Once support for a Roboteq control network is added to the FTC SDK (or added to your OpMode), the real fun can begin: you can create and load simple or complex subroutines onto the Roboteq control network to run distributively/in parallel in realtime on the control network!!!

    Roboteq controllers require minimum 10V DC supply. For a decent margin, you will probably want to supply 18 or 24V to the controllers. You can use two Tetrix batteries in series to supply 24V to the Roboteq controllers. BTW, FTC batteries are long overdue for an upgrade. I bet a lot of the failures at competitions are caused by the 3Ah Tetrix battery, which I feel is inadequate given the increases in loads that have been allowed over the past few seasons, and given the flakiness of the current electronics offerings from FTC.

    As for pricing, a 2 channel Roboteq controller lists for $250 USD. Considering the features, time savings, customer support, and educational value to students, a $250 price point for a 2 channel controller is highly reasonable. Roboteq will probably be inclined to offer their controllers to FTC teams for a lot less, to support the FIRST cause and/or for volume.

    Best of all, students get to learn valuable skills and knowledge of standard control system architectures, which are highly marketable. Rather than having to learn the proprietary control system that FIRST is developing, students get to learn standard control system architectures on mature, production-quality hardware in use today in industrial environments all over the world. I think FRC might even become envious of an FTC/Roboteq control system.
    Last edited by Alec; 12-03-2018, 05:56 PM.



  • JoAnn
    replied
    Originally posted by ejschuh View Post
    Hello FIRST. I have been really struggling with hardware issues this season, so much so that 7 out of 10 matches for my two teams were decided by hardware issues. The main issues were the friction-lock power connectors on the REV hub and the USB connections. I would like to give constructive feedback for the next hardware that I know you are working on.

    1) Place a SOM on the REV controller to take the place of the phone. With a SOM, you can upgrade the Android portion by just removing it and replacing it, at a cheaper price than a phone. One I would recommend:

    https://www.variscite.com/products/s...napdragon-410/

    2) All connectors should have a lock. The power connectors not locking is a huge oversight.

    3) Add this EEPROM to the robot controller: https://www.digikey.com/product-deta...4-2-ND/5872981

    This EEPROM is 30 cents and it will add NFC to the robot controller.

    4) Select a driver phone that has NFC. Through the NFC app, pair the phone to the robot controller by writing all the phone parameters to the EEPROM.

    5) Make the robot controller an always-on WiFi Direct hotspot; once the FIRST app syncs through NFC, the WiFi Direct pairing is done.

    6) All user interface functions and updates for both the robot controller and phone are done through the app.

    I feel these will solve most issues teams have. The FTA told us 25% of matches had at least one connection issue.


    Thanks for your suggestions! I'll share them with our technology review committee.

    JoAnn

