Commit 7ce946e7
authored 7 years ago by siyunhe2

Update README.md

parent 460f4407

Showing 1 changed file: FinalProject/Check_Point_5/README.md (+20 additions, −54 deletions)
## Checkpoint 4 (due 4/2/18)
### Demonstrate Collision Detection for UR3
First, make sure you have the Remote API file and related Python files in the same folder as Checkpoint4.py.
You should also load the appropriate scene in V-REP. The code does not ask for any user input.
We predefined some of the theta values, which can be looked up in the code. Based on these values,
the collision checker is run. You can check out the code in the Collision Detection section.
Before the robot moves, the collision checker tests whether a collision will occur and prints the
appropriate message to the screen. Then it moves the joints. During the first "iteration",
V-REP is not running the simulation, so the joints are allowed to move through the obstacles.
During the second "iteration", V-REP is in simulation mode, so the joints can no longer move
through the obstacles. The same joint configuration was used for both iterations.
### Collision Detection
The implementation of collision detection:
~~~~
import numpy as np
from numpy.linalg import norm
from scipy.linalg import expm

def screwBracForm(s):
    # Assumed helper (the original lives elsewhere in the repo): returns the
    # 4x4 se(3) bracket [S] of a screw axis s = (w, v).
    w, v = s[:3], s[3:]
    brac = np.zeros((4, 4))
    brac[:3, :3] = np.array([[0., -w[2], w[1]],
                             [w[2], 0., -w[0]],
                             [-w[1], w[0], 0.]])
    brac[:3, 3] = v
    return brac

def collisionChecker(S, initial_points, points, r, theta):
    # Map each joint's bounding-sphere center from the zero configuration to
    # the configuration theta, using the product of exponentials.
    for i in range(len(theta)):
        p_initial = np.array([[initial_points[0][i]],
                              [initial_points[1][i]],
                              [initial_points[2][i]],
                              [1]])
        temp = np.identity(4)
        for j in range(i + 1):
            temp = np.dot(temp, expm(screwBracForm(S[j]) * theta[j]))
        p_final = np.dot(temp, p_initial)
        points[0].append(p_final[0][0])
        points[1].append(p_final[1][0])
        points[2].append(p_final[2][0])
    # Stack all sphere centers (two obstacle blocks plus len(theta) joints)
    # into a 3 x (len(theta)+2) array.
    p = np.block([
        [np.reshape(np.asarray(points[0]), (1, len(theta) + 2))],
        [np.reshape(np.asarray(points[1]), (1, len(theta) + 2))],
        [np.reshape(np.asarray(points[2]), (1, len(theta) + 2))]
    ])
    # Pairwise test: two spheres of radius r collide when their centers are
    # within 2r of each other. Return 1 on any collision, 0 otherwise.
    for i in range(len(theta) + 2):
        point1 = np.array([[p[0][i]], [p[1][i]], [p[2][i]]])
        for j in range(len(theta) + 2 - i):
            if i == i + j:
                continue
            point2 = np.array([[p[0][j + i]], [p[1][j + i]], [p[2][j + i]]])
            if norm(point1 - point2) <= r * 2:
                return 1
    return 0
~~~~
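For illustration, here is a hedged example of how collisionChecker might be invoked; every value below is a made-up placeholder rather than the real UR3 data. The reshape to (1, len(theta)+2) inside the checker implies that points starts out holding the two obstacle-sphere centers, to which the len(theta) joint-sphere centers are then appended.
~~~~
import numpy as np

# Illustrative placeholders only -- not the real UR3 parameters.
S = [np.array([0., 0., 1., 0., 0., 0.]) for _ in range(6)]  # six screw axes
theta = np.zeros(6)                                         # candidate joint angles

# Joint bounding-sphere centers in the zero configuration, one column per joint.
initial_points = [[0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
                  [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                  [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]]

# points starts with the two obstacle-block sphere centers; the checker appends
# the joint centers, producing the len(theta)+2 columns it reshapes to.
points = [[1.0, -1.0], [1.0, -1.0], [0.3, 0.3]]
r = 0.05                                                    # bounding-sphere radius

if collisionChecker(S, initial_points, points, r, theta):
    print("collision detected")
else:
    print("configuration is collision-free")
~~~~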
## Checkpoint 5 (due 4/9/18)
### Demonstrate Motion/Path Planning for UR3
In this checkpoint, the user is asked to input the desired position of the UR3.
First, make sure you have the Remote API file and related Python files in the same folder as Checkpoint5.py.
You should also load the appropriate scene in V-REP.
In the scene, there are two blocks which we treat as obstacles. In order to detect the obstacles, we implemented
bounding spheres on the two blocks.
So the goal is to reach the desired pose given by the user input.
### Sampling-based Planner Algorithm
1) The path planning is implemented using a sampling-based planner algorithm. We set up a tree structure to store all the intermediate paths
connecting the starting position and the goal position. The structure holds two trees, T(forward) and T(backward), which together serve as a
kind of "bisection method".
First, we check whether the UR3 robot would encounter a self-collision or a collision with the obstacles.
2) Then we sample random thetas from a uniform distribution.
3) In order to generate the goal pose, we check whether the UR3 robot would encounter a
self-collision (between its own joints) or an obstacle collision (with the bounding spheres on the two blocks).
4) If there is no collision, we add the current position/path to T(forward) and iteratively find new thetas
in T(forward), setting each new theta as the child of the previous theta.
5) We then do the same for T(backward).
6) If a theta belongs to both T(forward) and T(backward), we have a complete connected path from the starting position to the desired goal pose; a sketch of this procedure is given below.
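The planner source itself is not part of this diff, so the following is a minimal sketch of one way steps 1)-6) could be realized, not the repository's actual implementation. Node, try_connect, bidirectional_plan, and the in_collision predicate are hypothetical names, and checking intermediate configurations along each edge is an added assumption (the steps above only mention checking the sampled thetas).
~~~~
import numpy as np

class Node:
    """One sampled joint configuration; parent pointers form the tree edges."""
    def __init__(self, theta, parent=None):
        self.theta = theta      # joint angles, shape (6,)
        self.parent = parent    # parent Node; None at the root

def path_to_root(node):
    """Follow parent pointers back to the root, collecting configurations."""
    path = []
    while node is not None:
        path.append(node.theta)
        node = node.parent
    return path

def try_connect(tree, theta, in_collision, step):
    """Attach theta to its nearest node in tree if the straight-line edge
    between them is collision-free; return the new node, else None."""
    near = min(tree, key=lambda n: np.linalg.norm(n.theta - theta))
    n_checks = max(2, int(np.linalg.norm(theta - near.theta) / step))
    for s in np.linspace(0.0, 1.0, n_checks):
        if in_collision(near.theta + s * (theta - near.theta)):
            return None
    node = Node(theta, parent=near)
    tree.append(node)
    return node

def bidirectional_plan(theta_start, theta_goal, in_collision,
                       max_iters=2000, step=0.3):
    """Grow T(forward) from the start and T(backward) from the goal until a
    sample joins both trees (steps 1-6 above)."""
    t_forward = [Node(theta_start)]                              # step 1
    t_backward = [Node(theta_goal)]
    for _ in range(max_iters):
        theta_rand = np.random.uniform(-np.pi, np.pi, size=6)    # step 2
        if in_collision(theta_rand):                             # step 3
            continue
        node_f = try_connect(t_forward, theta_rand, in_collision, step)   # step 4
        node_b = try_connect(t_backward, theta_rand, in_collision, step)  # step 5
        if node_f is not None and node_b is not None:            # step 6
            # The half-paths meet at theta_rand: start -> sample -> goal.
            return list(reversed(path_to_root(node_f))) + path_to_root(node_b)[1:]
    return None
~~~~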
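As a final hedged illustration, the planner sketch and the bounding-sphere checker might be wired together as follows; in_collision is a hypothetical adapter and reuses the placeholder S, initial_points, points, and r from the earlier example.
~~~~
def in_collision(theta):
    # collisionChecker appends to its points argument, so hand it a fresh
    # copy holding only the two obstacle centers on each call.
    pts = [points[0][:2], points[1][:2], points[2][:2]]
    return collisionChecker(S, initial_points, pts, r, theta) == 1

path = bidirectional_plan(np.zeros(6), np.full(6, 0.5), in_collision)
if path is not None:
    print("found a path with", len(path), "waypoints")
~~~~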