Augmented Reality
Basics
- Often abbreviated as AR
- A "layering" of digital data similar to that used in VR or MR, but layered directly over real life
- Thus a semi-transparent "screen" is used
- Can use inside-out or outside-in tracking, although inside-out is most common due to the highly mobile nature of most AR applications
- Most common tracking methods are:
  - Marker Based Inside Out Tracking
  - Computer Vision Based Inside Out Tracking
  - SteamVR Tracking
  - Optitrack
  - Accelerometer-Compass-Gyroscope Based Dead Reckoning
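As a rough illustration of why pure dead reckoning drifts (and is usually combined with markers, computer vision, or a compass), here is a minimal Python sketch that double-integrates IMU samples. The function name and inputs are hypothetical, and gravity compensation and frame rotation are omitted for brevity; this is not taken from any existing AR toolkit.

```python
import numpy as np

def dead_reckon(accel_samples, gyro_samples, dt):
    """Integrate IMU samples into a rough position/heading estimate.

    accel_samples: Nx3 accelerations in the device frame (m/s^2)
    gyro_samples:  Nx3 angular rates (rad/s)
    dt:            sample period in seconds

    Double integration makes small sensor errors grow quickly,
    which is why dead reckoning alone is rarely enough for AR.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    orientation = np.zeros(3)  # roll, pitch, yaw (small-angle approximation)
    for accel, gyro in zip(np.asarray(accel_samples), np.asarray(gyro_samples)):
        orientation += gyro * dt   # integrate angular rate -> attitude
        velocity += accel * dt     # integrate acceleration -> velocity
        position += velocity * dt  # integrate velocity -> position
    return position, orientation
```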
Main OSE Use Cases
Visually guided assemblies
- Can use a headset or some sort of handheld display
- Uses Object Recognition to see where the "parts" and "tools" are, then calculates and shows what needs to be done in the current "step" (see the sketch after this list)
- May be hardware intensive, but is doable, especially with AR/MR/VR Display Tethering
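As a rough sketch of the "calculate and show the current step" idea, the Python snippet below maps recognized part/tool labels to the overlay text for the current step. The STEPS data, label names, and function are hypothetical; the object-recognition stage itself is assumed to exist elsewhere and simply report a set of labels.

```python
# Hypothetical step definitions: each assembly step lists the part/tool
# labels that object recognition must report before its overlay is shown.
STEPS = [
    {"instruction": "Bolt the motor mount to the frame",
     "needs": {"motor_mount", "frame", "m8_bolt"}},
    {"instruction": "Attach the stepper motor to the mount",
     "needs": {"stepper_motor", "motor_mount"}},
]

def overlay_for_frame(detected_labels, step_index):
    """Return the text the headset or handheld display should render.

    detected_labels: set of labels the object-recognition stage reported
    step_index:      index of the current assembly step
    """
    step = STEPS[step_index]
    missing = step["needs"] - detected_labels
    if missing:
        return "Find: " + ", ".join(sorted(missing))
    return step["instruction"]

# Example: the recognizer currently sees the frame and the motor mount.
print(overlay_for_frame({"frame", "motor_mount"}, 0))  # -> "Find: m8_bolt"
```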
AR CAD
- Open Source AR CAD Software
- Essentially modeling by moving your hands around, but the result goes straight into CAD
- May allow for multiple people, given a well-connected work environment or a large PC/server capable of hosting many virtual clients
Use Case for Build Instructionals using Markers
- FLOSS example using https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/ - a tutorial using simple ArUco markers with Python. When the app sees a marker, it replaces the marker with another image to augment the scene with information (a minimal detection sketch follows this list).
  - OSE Use Case: when building a 3D printer, an ArUco marker is attached to each part, and a video shows how to build that part. With just an app and marked parts, you can build an entire machine with 'self-generated' instructions.
  - The savings come from not needing to work out how a part goes together by looking through documentation. This requires you to (1) find and identify the part; (2) follow the instructions for that part.
  - Challenges: identifying a part among many parts can be tricky if you have to dig through a pile of parts, and following instructions can be cumbersome.
  - Solutions with AR: the part is identified automatically (via its marker), and quick, on-demand, repeating instructions are shown automatically, without paging through documents or hitting play on a video.
- Overall SWOT: good for identifying parts, but you still have to apply the labels. If labels were applied automatically - for example by Image Recognition rather than markers - then we are set. Threat: cumbersome to learn unless there is a clear instructional. Also, small parts such as small screws are not easy to label. Conclusion: Image Recognition + AR is the solution.
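Below is a minimal sketch of the marker-driven flow described above, using OpenCV's ArUco module in Python (assuming the OpenCV 4.7+ API with cv2.aruco.ArucoDetector; older versions expose cv2.aruco.detectMarkers and DetectorParameters_create instead). The INSTRUCTIONS mapping and the on-screen text overlay are hypothetical stand-ins for the per-part videos described above, not part of any existing OSE tool.

```python
import cv2

# Marker dictionary and detector (OpenCV 4.7+ ArUco API).
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(aruco_dict, params)

# Hypothetical mapping from marker id to the instruction for that part.
INSTRUCTIONS = {
    0: "Attach the X-axis stepper motor",
    1: "Insert the heated bed",
}

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_id, c in zip(ids.flatten(), corners):
            text = INSTRUCTIONS.get(int(marker_id), "Unknown part")
            x, y = map(int, c[0][0])  # top-left corner of the marker
            cv2.putText(frame, text, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR build instructions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```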
Links
- Open source AR based on markers - https://www.openspace3d.com/softwarelogiciel/