
dsharlet

Eurobricks Vassals
  • Posts

    13
  • Joined

  • Last visited

About dsharlet

Spam Prevention

  • What is your favorite LEGO theme? (we need this info to prevent spam)
    technic

Profile Information

  • Gender
    Male

Extra

  • Special Tags 1
    https://www.eurobricks.com/forum/public/style_images/tags/technicgear2.png
  • Special Tags 2
    https://www.eurobricks.com/forum/public/style_images/tags/mindstorms_ev3.png

Recent Profile Visitors

630 profile views

dsharlet's Achievements

Rookie (2/14)

Recent Badges

  • First Post
  • Collaborator
  • Conversation Starter
  • Week One Done
  • One Month Later

Posts

  1. Thanks for the tip. Between that and my strategy of just deleting the axle, rotating the parts, then putting the axle back, I've managed to resolve all of the gearing and rotation issues I mentioned. I also found that you cannot rotate parts connected with black friction pins; if you replace those with gray frictionless pins, you can do the rotation and then put the black friction pins back. I also figured out the hose issue: there's a blue 3L connector going through the two ends of the hose, and I simply flipped the connector over and it was able to connect...?! The one issue I have now is that the old 14 tooth bevel gear seems to be missing entirely from the parts catalog, including the "LDD extended" theme. But I doubt it would let me place that gear anyway, since it doesn't *quite* mesh correctly with the 28 tooth turntable (in the real world, you have to cheat a bit to get it to mesh smoothly).
  2. I've made some progress... I've found that if I delete the axle I'm trying to rotate and replace it with a pin/axle connector, then I can rotate it. So I rotate it how it needs to go, delete the pin, and put the axle back in, and LDD rotates the axle to fit the (now rotated) gear. This seems like an absurd workflow... and it only works for some of my missing gears. There *must* be a way to rotate axles...
  3. Hi all, I'm having some trouble with LDD. I've built most of my model in LDD; I'm down to adding a few gears and bending a few parts that I can't get to connect.

     The problem I'm having with gears is that they collide when I try to insert them meshed together. This is perhaps not surprising: I need to rotate one of the axles to make the gears mesh. However, I simply cannot figure out how to rotate the axles. The hinge tool seems to just refuse any rotation. I've attempted to follow several sets of instructions I found online, but something is different about what I'm doing that I can't figure out, because LDD rejects any change to the angle that I make. I wish I could be more precise about the problem, but I'm not sure what is going on! There are quite a few missing gears in the model:

     - Both differentials are missing the middle 12 tooth bevel gear.
     - I haven't successfully meshed any 8-24 tooth connections.
     - I was able to mesh a vertical 16 tooth gear onto the differential's 16 tooth gear, but I haven't been able to mesh two 16 tooth gears horizontally.

     Somewhat related, I also need to be able to rotate this part above the knob wheel with the red axle connector: Interestingly, I was able to rotate a similar part elsewhere in the model, but that part was connected with an axle-peg connector. It seems like I've been able to rotate whatever I want, as long as the rotation does *not* involve an axle!?

     Another problem I'm having is with flex hoses: I was able to bend the top hose into place, but the bottom one simply will not attach (they're supposed to form a circle). It seems to snap into place every so often, but it's not highlighted in green, and when I release the hose tool, it jumps back to the way it is in the screenshot.

     Finally, probably the trickiest one: I need to rotate the two 4x4 angle liftarms near the bottom of the model so that the two green-circled holes line up and have a pin connecting them.

     I'd appreciate any suggestions/pointers on any of the above issues! The model I'm trying to build is this (with some differences obviously - I didn't have all the parts I wanted in real life at the time of this video :)
  4. I've built a few clocks with LEGO over the years, but none of them have been practical to use. To be practical, a clock needs to run for at least 24 hours and be really easy to rewind. Having to spend more than 10-20 seconds with a winding wheel is too much of a pain (and my last clock took way longer than that!). The main difference between this clock and my other clocks is that the drive weight is on a chain, and the chain can simply be pulled back through the clock to rewind it (driving a ratchet instead of the drivetrain), which solves the rewinding problem.

     Some other details:

     - The escapement is a Galileo escapement with a 40 tooth wheel.
     - The hands can be moved forward by hand to set the time via a differential ratchet.
     - The drive weight is ~600g (11 boat weights x 53g, plus the parts to hold it together); the chain is a loop connected to the bottom of the weight to balance the drive weight.
     - The ratio between the chain and the escapement is 416:1 (40:24, a 2:1 differential, and 40:8 x 3).
     - The weight falls 2.59cm/hour, so with 3 feet of drop the clock will run for about 35 hours (it's currently mounted a little over 5 feet off the floor, so it could be given enough chain to run for about 60 hours).
     - It should be accurate to within a minute per 24 hours, but it will take some time to dial in the position of the bob on the pendulum.
     - The single thing that helps the most with efficiency is a knife edge suspension for the pendulum. I learned this technique (and probably other techniques too) from https://www.youtube.com/user/KEvronista and https://www.youtube.com/user/BenVanDeWaal

     Here's a video: And here's a slow motion video of the escapement: I hope you find it interesting! I wanted to share the chain drive technique because I haven't seen it before, and it really makes a LEGO clock a lot more fun and less of a pain in the butt than a string on a spool :)
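
The run time quoted in that post follows directly from the drive ratio and the drop rate; as a quick cross-check, here is that arithmetic in a small C++ sketch, using only the numbers given in the post (no independent measurements):

    // Cross-check of the gear ratio and run time quoted in the clock post above.
    // All values come from the post itself.
    #include <cstdio>

    int main() {
      // Chain-to-escapement ratio: 40:24, a 2:1 differential, then three 40:8 stages.
      double ratio = (40.0 / 24.0) * 2.0 * (40.0 / 8.0) * (40.0 / 8.0) * (40.0 / 8.0);

      double drop_cm_per_hour = 2.59;      // how fast the drive weight falls
      double drop_3ft_cm = 3 * 30.48;      // ~3 feet of usable drop
      double drop_5ft_cm = 5 * 30.48;      // mounted a little over 5 feet up

      std::printf("overall ratio: %.1f:1\n", ratio);                                  // ~416.7:1
      std::printf("run time (3 ft): %.0f hours\n", drop_3ft_cm / drop_cm_per_hour);   // ~35 hours
      std::printf("run time (5 ft): %.0f hours\n", drop_5ft_cm / drop_cm_per_hour);   // ~59 hours
      return 0;
    }
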
  5. It's really both. The articulations are problematic because they depend on axle-bushing clutch power for stiffness, which is terrible. They also depend on the rigidity of the non-friction pins in the dimensions other than the rotation axis, which is pretty bad. The small Technic turntables are new to me, I didn't know about that part! I'm not sure it would be useful in this case, but it very well might be. Two of the 3 joints on each arm are basically universal joints, which are tough to build rigidly and compactly.

     The other rigidity problem is that the precision of a delta robot depends on maintaining parallelograms for the "forearm" sections of the arms. It's difficult to build reinforcements to maintain this because the arms need to be able to squeeze together, so you can't build anything in between the arms to hold them together. Even adding just an axle between the two beams of the parallelogram limits the range of motion fairly significantly and, more importantly, means that the delta robot controller needs to be aware of these limits, which complicates the kinematics greatly. I've since built better joints for a delta robot for another project, but haven't rebuilt the catcher robot arms with these ideas yet.

     Interesting! I don't own the NXT motors, thanks for the pointer! All of my projects tend to rely on precise controllers/positioning, so the EV3 motor slop has always been annoying. I believe the NXT motors are compatible with the EV3 brick... I might buy one and try it out.
  6. I just meant that the inside of the robot is boring; there are no moving parts. It's just lots and lots of beams to make the robot rigid :) It's not clear to me how to use the plugin, so instead I wrote up a post describing the trajectory estimation in more detail: http://www.dsharlet.com/2016/09/11/estimating-the-3d-trajectory-of-a-flying-object-from-2d-observations/ Please let me know if any of it is unclear! However, I have to warn you that without some background in these topics (non-linear optimization, computer vision/camera geometry) it might be difficult to understand; I didn't attempt to explain the math that isn't specific to this problem.
  7. Ah, I would not have expected a TeX plugin on a LEGO forum! I'll take a stab at explaining the trajectory estimation this weekend; that's the more interesting and fun problem here. The calibration barely works, and requires a lot more background.

     Well, this robot does have two eyes, and it still has to do lots of math! It really is pretty interesting to try to understand how humans catch things, while robots have to do incredible amounts of arithmetic to do the same. I read an interesting paper back when I was working on this project that gave a reasonable explanation of how humans catch things, but I can't find it again. The gist was that humans (and animals) are basically doing a similar-triangles analysis (unconsciously, of course) constantly while watching the ball, and moving to keep the triangle the same relative shape (i.e. similar) leads the person to where the ball will land. That's not enough information to reproduce the argument exactly; I clearly didn't understand it well enough to make it stick... maybe one of you can connect the dots from my lame attempt at reproducing it... :)
  8. Well, you do have to throw the ball pretty carefully ;)

     The calibration method is new to me; I'm not sure if it's been used before. The thing is, it's really not a good method :) It's not very reliable and it requires very good initialization, otherwise it won't converge. It's hard to do, too: it requires moving the ball slowly for minutes, being careful not to let the ball go out of view of the cameras. The only advantage of this method is that it doesn't need actual images; it works with the object tracking data that NXTcam gives you. It's hard to imagine any realistic system with this limitation. BTW: as much as I complain about NXTcam, it's still a great tool for EV3 and I'm glad I bought them, they just require creative solutions to problems like this :)

     The trajectory estimation method is also new to me, but I'm sure that it's been done before. It's a very straightforward approach: I just assume the trajectory of the ball is a parabola, and given a trajectory estimate, compute the error between the estimated parabola and the observations in (calibrated) 2D camera space. This is the loss function; when optimized, it gives a good estimate of the trajectory. I've been working on a better writeup of this part, I'll try to finish it soon. Doing it properly requires writing down a bit of math, which is hard to do in text.

     My code is not the most readable, but if you want to look at it, the calibration problem is solved here (the header has a pretty large comment describing the calibration procedure): https://github.com/d...n/calibration.h https://github.com/d...bration.cpp#L31 The trajectory estimation and some other related code is here: https://github.com/d.../trajectory.cpp It surely could be implemented in Python too, which works on ev3dev! I haven't tried using Python on ev3dev myself, but I think it's actually a much more popular choice than C++.

     The use of pneumatics in this robot is really basic: there's only one valve, directly connected to the smaller EV3 servo motor. I used pneumatics for the grabber because they're very light, which helps the arm move quickly. Here's a better view of the grabber and the control for it: https://goo.gl/photo...7nEFt4ZtnLX52B6 https://goo.gl/photo...Eir42MArHYc25A6

     I'm really impressed with EV3, thanks to LEGO for giving us such an awesome system :) I'm a software engineer, I do lots of DSP (on images) for work, no robots though! Best of luck to you, I'm glad I could provide a bit of entertainment :)
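
To make the loss function described in that post a bit more concrete, here is a minimal sketch of the idea. It is not the actual DeltaCatch code: the camera is reduced to a simple pinhole with no rotation, and all of the types and names are made up purely for illustration.

    // Sketch of the trajectory loss described above: assume the ball follows a parabola,
    // project it into each (calibrated) camera at the observation times, and sum the
    // squared error against the tracked 2D positions. Not the DeltaCatch implementation.
    #include <array>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    // A calibrated camera, simplified to a pinhole with no rotation for brevity.
    struct Camera {
      double fx, fy, cx, cy;  // intrinsics
      Vec3 origin;            // camera position in world space
      Vec2 project(const Vec3& p) const {
        double X = p.x - origin.x, Y = p.y - origin.y, Z = p.z - origin.z;
        return {fx * X / Z + cx, fy * Y / Z + cy};
      }
    };

    struct Observation { double t; Vec2 px; };  // tracked 2D ball position at time t
    struct Trajectory { Vec3 x0, v; };          // unknowns: initial position and velocity

    // Point on the parabola at time t, with gravity along -z.
    Vec3 sample(const Trajectory& tr, double t) {
      const double g = 981.0;  // cm/s^2; units must match the calibration
      return {tr.x0.x + tr.v.x * t,
              tr.x0.y + tr.v.y * t,
              tr.x0.z + tr.v.z * t - 0.5 * g * t * t};
    }

    // Loss: sum of squared errors between the projected parabola and the observations
    // from both cameras, in 2D camera space. A nonlinear least squares solver (e.g.
    // Levenberg-Marquardt) would minimize this over the six unknowns in Trajectory.
    double loss(const Trajectory& tr,
                const std::array<Camera, 2>& cams,
                const std::array<std::vector<Observation>, 2>& obs) {
      double err = 0;
      for (int c = 0; c < 2; c++) {
        for (const Observation& o : obs[c]) {
          Vec2 p = cams[c].project(sample(tr, o.t));
          err += (p.x - o.px.x) * (p.x - o.px.x) + (p.y - o.px.y) * (p.y - o.px.y);
        }
      }
      return err;
    }

The real problem also has to deal with the two cameras' frames not being taken at the same instants, so the actual objective adds an unknown time offset between the two observation sets; that detail is described in the longer post further down.
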
  9. By stock controller, I mean the controller that is built into the EV3 (ev3dev, that is...). The custom controller is indeed my own PID controller. It is designed to allow the setpoint to vary without resetting the internal state of the PID. The catching action requires the robot to anticipate where the ball will land and start moving there very quickly. However, even while the arms are still in motion, the robot gets more observations from the cameras, which gives an improved estimate of the trajectory (and where the ball will land). So, we want to change the setpoint for the arms without resetting the PID controllers, so they continue moving smoothly and quickly. My custom controller allows this to work.

     You can kind of see why this is necessary in one of the failed catches in the video, starting at 29 seconds: you can almost see how the initial estimate of the trajectory was pretty far off. It then appears to get a very late improved estimate of where the ball will land that *might* be correct, but it's too late, and so the grabber just hits the ball when it tries to move there, instead of getting under it in time. It all happens in a split second, so it takes a few watches to see it.

     BTW, if you want to see it, my implementation of the PID controller (and the motor controller that uses it) is here: https://github.com/d...id_controller.h https://github.com/d...c/ev3/servo.cpp

     Wow, thank you! I'm honored :)
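
As an illustration of what "varying the setpoint without resetting the internal state" can look like, here is a rough sketch of such a controller. This is not the linked ev3dev code: the structure, the gains, and the derivative-on-measurement trick are just one common way to keep the output smooth while the target moves.

    // Rough sketch of a PID controller whose setpoint can change at any time without
    // clearing the integral/derivative state, so the output stays smooth while the
    // target moves. Illustration only, not the code linked above.
    struct PID {
      double kp = 0, ki = 0, kd = 0;  // gains (placeholders; they need tuning)
      double setpoint = 0;            // current target position
      double integral = 0;            // accumulated error
      double prev_measured = 0;       // last measured position, for the derivative term
      bool first = true;

      // Change the target. Nothing is reset here, which is the whole point: the
      // integral and derivative state carry over, so the motion stays smooth.
      void set_setpoint(double sp) { setpoint = sp; }

      // Called at a fixed period dt with the measured position; returns the control output.
      double tick(double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        // Differentiate the measurement rather than the error, so a sudden setpoint
        // change does not produce a derivative "kick".
        double derivative = first ? 0.0 : (measured - prev_measured) / dt;
        first = false;
        prev_measured = measured;
        return kp * error + ki * integral - kd * derivative;
      }
    };

Each time a refined trajectory estimate arrives, the caller just calls set_setpoint() with the new target and keeps ticking; a controller that cleared its state on every new target would produce the jerky motion shown in the comparison videos.
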
  10. I never attempted to have it catch the ball with the stock controller, so I don't know. Even if it did work, I didn't like running it like that very much, because I'm pretty sure it would destroy the servo motor gears quickly. I also had to strap the whole robot down to avoid it shaking itself off the table. It would probably ruin the camera calibration quickly too.
  11. It would be nice to use much higher quality cameras; the challenge is latency. NXTcam is pretty nice in that there should be very little latency between the world and its observations. Using a separate camera and/or doing the processing on a computer would enable more powerful techniques, but I think the added latency would be really problematic. Aside from the latency itself, it would probably also be unpredictable/random latency, which would be hard to deal with.

     Thanks! Any particular piece you want more info on? I'm happy to go into more detail if you have some specific questions/curiosity about a particular part of the project :)

     One thing I didn't mention before that might be interesting: this was the first LEGO robot I've worked on that required precisely behaving motor controllers. The stock controllers weren't cutting it, because when you change the target position (setpoint), they reset the internal PID controller state, which leads to very jerky motion. So, I built a custom PID controller that allows varying the setpoint without resetting the internal state. This means you can continuously change the setpoint (e.g. to track an objective) without ugly transients. Here is a video comparing the controllers: Here are two more videos that compare the delta robot with and without the custom controllers. Stock controller: Custom controller:
  12. Thanks for the kind words! Here are the answers to some of the questions:

     I worked on this for a few months. I probably spent more time on the software than the robot itself (lots of time waiting for parts from BrickLink...).

     Yes, all of the processing is done on the EV3! Even the camera calibration, which in hindsight was probably a mistake (it takes ~10-60 seconds to solve that problem, depending on how much calibration data you give it). The EV3 doesn't have floating point hardware, and floating point is strictly necessary for the numerical techniques I'm using, so I wasn't sure the EV3 would be fast enough. But I've found the EV3 hardware to be quite powerful. It can probably do anything you might want a LEGO robot to do, with the exception of processing large amounts of data (like images, but that's done on the NXTcams on a dedicated processor).

     Regarding the quality of throws/camera tracking, there are a few issues to deal with:

     - The cameras have a really narrow field of view, so the region in which the ball is visible to both cameras is relatively small.
     - The cameras also can only see the ball up to ~8 feet (2-3 m) away. The more light there is, the further away they can see it (thus the lamps attached to the table, and the funky lighting in my video).
     - The robot can only reach a relatively small area (~50 studs horizontally x ~40 studs vertically).

     The success rate (1 in 5) already excludes throws that the robot can't be expected to catch due to some of these issues. I think the remaining issues are:

     - The NXTcams only run at 30 fps, and the tracking is quite crude. You only get a bounding box for a particular color, and the NXTcam color matching is done in a poor way too. I was prepared to attempt to reprogram the NXTcam firmware to do better object tracking: use YUV instead of RGB (should make the tracking less sensitive to lighting), and compute a centroid instead of a bounding box (should give sub-pixel precision of the ball position). But it proved to be just *barely* able to track the balls well enough :)
     - There's quite a lot of slop in the EV3 servo motors. I attached the arms directly to the motors to try to avoid this slop, and didn't gear them down to reduce the error because I needed the arms to move as quickly as possible.
     - Despite trying pretty hard to make everything rigid, the arm is still pretty flexible, which leads to errors in positioning.

     Any particular parts you want to see? The inside of the robot is actually quite boring, because there are very few moving parts aside from the arms. There's not a single gear train in this entire robot! The whole structure is mostly just massive overbuilding to make the base as stiff as possible.

     BTW, the link could be quite easy to miss next to the embedded YouTube video, so here is a link to some still photos again: https://goo.gl/photos/EYkhrsioVxZbGw84A
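
The "centroid instead of a bounding box" idea mentioned in that post is simple to sketch. Assuming the firmware had access to a per-pixel mask of which pixels matched the target color (an assumption for illustration; this is not NXTcam code, and the post says the idea was never actually implemented), the sub-pixel position estimate would just be the mean coordinate of the matching pixels:

    // Hypothetical sketch of the "centroid instead of bounding box" idea: average the
    // coordinates of the pixels that matched the ball color to get a sub-pixel position,
    // instead of reporting a coarse bounding box. Not NXTcam firmware code.
    #include <cstdint>
    #include <vector>

    struct Centroid { double x, y; bool valid; };

    Centroid color_centroid(const std::vector<uint8_t>& mask, int width, int height) {
      double sum_x = 0, sum_y = 0;
      long count = 0;
      for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
          if (mask[y * width + x]) {  // this pixel matched the ball color
            sum_x += x;
            sum_y += y;
            count++;
          }
        }
      }
      if (count == 0) return {0, 0, false};  // ball not visible in this frame
      return {sum_x / count, sum_y / count, true};
    }
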
  13. Hi all, this is a project I worked on ~1.5 years ago, but I'm just now finally getting around to writing up some blurbs about how some of it works. I had been thinking for *ages* (ever since RCX 1.0) how interesting and cool it would be to build a LEGO robot to catch a ball in mid-air. Obviously, the parts we got from LEGO at that time were not remotely good enough to attempt this (and I did not know some of the key things necessary to make this work). But by the time of EV3, I thought maybe it was possible, so I gave it a try, with some success.

      Here are some more pictures: https://goo.gl/photo...khrsioVxZbGw84A and a video of it in action:

      Some quick details:

      - The robot uses two NXTcam cameras (http://www.mindsenso...page&PAGE_id=78) to track the ball in stereo, which gives a 3D estimate of the trajectory of the ball. Here is a video from when it was a WIP, testing just the stereo tracking:
      - The robot is programmed using ev3dev (www.ev3dev.org), written in C++, and the code is available on GitHub: https://github.com/dsharlet/DeltaCatch
      - The robot will catch ~1/5 of (reachable) throws; it can reach an area with a radius of ~25 studs, and it can reach up ~40 studs.

      A bit more information on the design and how it works:

      In order to catch a ball, a robot needs to be able to move very quickly and accurately. This is why I chose a delta robot design, and (over-)built it to be very stiff. This is also the motivation for using pneumatics to control the hand of the robot: the pneumatics have very low mass and move quickly. All of the heavier parts controlling the hand are located on the base of the robot, where mass is less of a concern. The robot has a range of about 50 LEGO studs (about 40 cm) horizontally, and it can reach about 40 studs above the top of the platform.

      Most of the hard problems in getting this to work are in the software for tracking the ball and planning motions for executing a catch. The biggest challenge was the camera calibration. Calibration is the process of determining the camera parameters that define how positions in the world are mapped to pixel coordinates on the sensor of a camera. Calibrating NXTcam (or similar object detecting cameras) is very difficult because this type of camera does not provide images, only object tracking data. This means the standard calibration tools and packages that use known relative feature positions cannot be easily used. In addition to this, NXTcam is very low resolution. Ideally, camera calibration would be done with subpixel accurate feature extraction, especially for such a low resolution device as NXTcam.

      My solution to the calibration problem is to use a different constraint. I use object tracking data from a ball tied to one end of a string, with the other end of the string fixed to a known position relative to the robot. Moving the ball around the stereo field of view while keeping the string taut gives tracking observations of an object known to lie on a sphere. The calibration is then formulated as an optimization problem: find the camera parameters that minimize the distance from the surface of the sphere to the estimated 3D position of the ball. This calibration procedure is finicky and took me several attempts to get a good calibration. It helps to use observations from several different spheres, the more the better. It's also important to use inelastic string, such as kite string.

      Once the cameras are calibrated, the next problem is estimating the trajectory of a ball flying through the air. This is difficult because the NXTcam frames are not synchronized, so we cannot assume that the object detected by the two NXTcams is at the same 3D position. This means that the most obvious approach, computing the 3D position of the ball for each pair of camera observations and fitting a trajectory to these positions, is not viable. To work around this, I set up trajectory estimation as an optimization problem where the objective function is the reprojection error (https://en.wikipedia...rojection_error) between the current estimate of the trajectory, sampled at the observation times, and the observations from the cameras. This formulation allows for a new variable representing the unknown time shift between the two sets of observations. Despite lacking floating point hardware, the EV3 processor can solve this nonlinear optimization problem in 40-80 ms (depending on initialization quality), which is fast enough to give the robot enough time to move to where the trajectory meets the robot.

      The robot is programmed to begin moving to where the trajectory is expected to intersect the region reachable by the robot as soon as it has the first estimate of the trajectory. As more camera observations arrive, the estimated trajectory can be improved, so the robot continues moving to the expected intersection of the refined trajectory and the reachable area. To increase the window of time for catching the ball, the robot attempts to match the trajectory of the ball while closing the "hand" as the ball arrives. This reduces the risk of the ball bouncing out of the hand during the catch.

      Anyways, that's probably too much detail to start with; let me know if you want to know more.
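
To make the sphere-constraint calibration described in that post a bit more concrete, here is a heavily simplified sketch. It is not the DeltaCatch code: the cameras are reduced to axis-aligned pinholes, the triangulation is a crude midpoint of the two viewing rays, and all names are made up for illustration; the real problem optimizes a full camera model over many observations and several spheres.

    // Sketch of the sphere-constraint calibration objective: the ball on a taut string is
    // known to lie on a sphere, so penalize the distance of each triangulated 3D position
    // from the sphere's surface, as a function of the candidate camera parameters.
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double u, v; };

    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    double length(Vec3 a) { return std::sqrt(dot(a, a)); }

    struct Camera {
      double fx, fy, cx, cy;  // intrinsics (part of the unknowns being calibrated)
      Vec3 origin;            // camera position (also unknown); rotation omitted for brevity
      // Viewing ray (not normalized) through pixel (u, v), in world coordinates.
      Vec3 ray(Vec2 px) const { return {(px.u - cx) / fx, (px.v - cy) / fy, 1.0}; }
    };

    // Midpoint of the closest approach of the two viewing rays: a crude stereo triangulation.
    Vec3 triangulate(const Camera& cam0, const Camera& cam1, Vec2 px0, Vec2 px1) {
      Vec3 d0 = cam0.ray(px0), d1 = cam1.ray(px1);
      Vec3 w = cam0.origin - cam1.origin;
      double a = dot(d0, d0), b = dot(d0, d1), c = dot(d1, d1);
      double d = dot(d0, w), e = dot(d1, w);
      double denom = a * c - b * b;        // near zero if the rays are almost parallel
      double s = (b * e - c * d) / denom;  // parameter along ray 0
      double t = (a * e - b * d) / denom;  // parameter along ray 1
      return 0.5 * ((cam0.origin + s * d0) + (cam1.origin + t * d1));
    }

    // One tracked observation of the ball-on-a-string, seen by both cameras.
    struct StereoObservation { Vec2 px0, px1; };

    // Calibration objective: sum of squared distances of the triangulated ball positions
    // from the sphere's surface. A nonlinear least squares solver would minimize this
    // over the parameters of both cameras.
    double calibration_loss(const Camera& cam0, const Camera& cam1,
                            Vec3 sphere_center, double sphere_radius,
                            const std::vector<StereoObservation>& obs) {
      double err = 0;
      for (const StereoObservation& o : obs) {
        Vec3 p = triangulate(cam0, cam1, o.px0, o.px1);
        double r = length(p - sphere_center) - sphere_radius;
        err += r * r;
      }
      return err;
    }

In this simplified setup the string's attachment point and length give the sphere's center and radius, and using several different spheres (as the post recommends) just adds more residual terms of the same form.
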