llvm / Hercules · Merge requests · !214

More optimizations
Merged · rarbore2 requested to merge more_opt3 into main · 2 weeks ago
Overview 0 · Commits 21 · Pipelines 1 · Changes 1

A lot of misc. optimization work, notably parallelization of the Rodinia CPU schedules.
Support for >32 tiles in the GPU backend.
Tweak some transformations.

Viewing commit d303b185 · 1 file · +6 −5

fix syncthreads hack · d303b185 · rarbore2 authored 2 weeks ago

hercules_cg/src/gpu.rs · +6 −5

@@ -562,8 +562,9 @@ namespace cg = cooperative_groups;
      * and writes.
      */
     fn codegen_helpers(&self, w: &mut String) -> Result<(), Error> {
+        write!(w, "\t__shared__ cg::experimental::block_tile_memory<1024> block_sync_shared;\n")?;
         write!(w, "\tcg::grid_group grid = cg::this_grid();\n")?;
-        write!(w, "\tcg::thread_block block = cg::this_thread_block();\n")?;
+        write!(w, "\tcg::thread_block block = cg::experimental::this_thread_block(block_sync_shared);\n")?;
         Ok(())
     }
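
For context, a rough sketch of the CUDA preamble these write! calls emit after the change. Only the three emitted statements come from the format strings above; the kernel name, includes, and surrounding scaffolding are illustrative, and on some CUDA 11.x toolkits the experimental cooperative-groups ABI may need to be enabled before including the header.

// Sketch of the generated helper preamble after this change; everything except
// the three emitted statements is illustrative, not taken from the diff.
// #define _CG_ABI_EXPERIMENTAL   // may be needed on some CUDA 11.x toolkits
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void example_kernel() {
    // Newly emitted: shared-memory backing store for experimental block tiles.
    __shared__ cg::experimental::block_tile_memory<1024> block_sync_shared;
    // Unchanged: grid-wide group (requires a cooperative kernel launch).
    cg::grid_group grid = cg::this_grid();
    // Changed: the block handle now comes from the experimental API, so tiles
    // partitioned from it can use block_sync_shared for their synchronization.
    cg::thread_block block = cg::experimental::this_thread_block(block_sync_shared);
}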

@@ -1322,8 +1323,8 @@ namespace cg = cooperative_groups;
             )?;
             write!(w, "{}\t*({} + i) = 0;\n", tabs, define_variable)?;
             write!(w, "{}}}\n", tabs)?;
-            // write!(w, "{}{}.sync();\n", tabs, cg_tile)?;
-            write!(w, "__syncthreads\n")?;
+            write!(w, "{}{}.sync();\n", tabs, cg_tile)?;
+            // write!(w, "__syncthreads\n")?;
         }
     }
     // Dynamic constants emitted at top
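
This hunk (and the third hunk below, which makes the same swap at the join side) is the actual "syncthreads hack" fix: the generated code previously emitted a bare __syncthreads token (no parentheses, and here not even a semicolon) as a stand-in barrier while the real cooperative-groups sync sat commented out; now the tile-scoped sync is emitted and the hack is the comment. A minimal sketch of the kind of CUDA this produces, with hypothetical identifiers (cg_tile, buf, n) and an example tile width of 32 standing in for whatever the codegen actually generates:

// Minimal sketch of the zeroing epilogue the generated code now contains; the
// identifiers cg_tile, buf, and n are hypothetical stand-ins, and the tile
// width of 32 is only an example.
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__device__ void zero_buffer(cg::thread_block_tile<32> cg_tile, float *buf, int n) {
    // Each thread of the tile zeroes a strided slice of the buffer.
    for (int i = cg_tile.thread_rank(); i < n; i += cg_tile.size()) {
        *(buf + i) = 0;
    }
    // The emitted barrier is now scoped to the tile that did the writes; a
    // block-wide __syncthreads() would be incorrect if only part of the block
    // reaches this point, and the old bare "__syncthreads" string was not even
    // a valid statement.
    cg_tile.sync();
}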

@@ -1783,8 +1784,8 @@ namespace cg = cooperative_groups;
             }
             let fork = self.join_fork_map.get(&id).unwrap();
             let cg_tile_available = self.get_cg_tile(*fork, CGType::Available);
-            // write!(w_term, "\t{}.sync();\n", cg_tile_available)?;
-            write!(w_term, "\t__syncthreads;\n")?;
+            write!(w_term, "\t{}.sync();\n", cg_tile_available)?;
+            // write!(w_term, "\t__syncthreads;\n")?;
         }
         // If the Fork was parallelized, each thread or UsedPerId tile of
         // threads only runs one ThreadID, so we can jump straight to the