
Edge detection

Merged Aaron Councilman requested to merge juno-edge-detection into main
  • Juno and Rust versions, plus a harness that includes verification and can display frames or write out the entire video; we'll probably need to tweak the verification for GPU, as with Cava.
  • Uses OpenCV for reading/writing videos and displaying frames. Honestly, the OpenCV bindings for Rust are pretty rough; they stay very close to the C++ API. I use unsafe in one place to get the frame as a slice (see the sketch after this list).
  • Adds an opencv feature; without it edge detection is disabled, but this means we don't depend on the OpenCV system package (libopencv-dev on Debian/Ubuntu) being installed. The feature is enabled for CI, where we run just 2 frames of a video to avoid increasing CI time too much.
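
The unsafe mentioned above is for viewing an OpenCV frame as a Rust slice. Here is a minimal sketch of that pattern, assuming a continuous, single-channel (grayscale), u8-typed Mat and the opencv crate's accessors:

    use opencv::prelude::*;

    // Reinterpret a frame's pixel buffer as a byte slice.
    // This is only valid if the Mat is continuous, single-channel, and
    // u8-typed, so that rows * cols bytes sit contiguously behind data().
    fn frame_as_slice(frame: &opencv::core::Mat) -> &[u8] {
        let len = (frame.rows() * frame.cols()) as usize;
        unsafe { std::slice::from_raw_parts(frame.data(), len) }
    }

The feature gating can be done with cfg attributes; a sketch under the assumption that all OpenCV-touching code sits in one module (module and function names here are hypothetical):

    // main.rs of the harness
    #[cfg(feature = "opencv")]
    mod video_io; // everything that uses the opencv crate lives here

    #[cfg(feature = "opencv")]
    fn main() {
        video_io::run_edge_detection();
    }

    #[cfg(not(feature = "opencv"))]
    fn main() {
        eprintln!("edge detection is disabled: rebuild with --features opencv");
    }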

Activity

182     let input = unsafe { from_raw_parts(ptr, height * width) };
183
184     let input_h = HerculesCPURef::from_slice(input);
185
186     let result = async_std::task::block_on(async {
187         r.run(
188             height as u64,
189             width as u64,
190             gs as u64,
191             sz as u64,
192             sb as u64,
193             input_h,
194             gaussian_filter_h.clone(),
195             structure_h.clone(),
196             sx_h.clone(),
197             sy_h.clone(),
  • Comment on lines +194 to +197

    To pass the same values to multiple executions I have to clone the HerculesCPURefs. Their clones should be cheap, so this is fine, but if they really are that cheap, I wonder whether we could implement Copy to avoid having to clone explicitly.

  • Yeah, think of it like cloning a &T. I suppose we could implement Copy for it, just like &T.
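
    To make the analogy concrete, here is a hypothetical stand-in (not the real HerculesCPURef definition): a handle that is just a pointer plus a length, like &[u8], can derive Copy, since duplicating it copies only the pointer and length, never the underlying buffer:

        use std::marker::PhantomData;

        // Hypothetical reference-like handle over host memory.
        // Copying it is a bitwise copy of (ptr, len), sound for the same
        // reason that &[u8] is Copy: the buffer itself is never duplicated.
        #[derive(Clone, Copy)]
        pub struct CpuRef<'a> {
            ptr: *const u8,
            len: usize,
            _marker: PhantomData<&'a [u8]>,
        }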

  • I have to point out that

    1. The Juno implementation is shorter than the Rust one.
    2. In total, the Juno, Rust, and harness code come to < 1000 lines, and the harness has significantly more features than the HPVM version, which is over 1600 lines.
  • Awesome work - this looks good. I think I'll merge this in after I merge the GPU stuff in.

  • rarbore2 approved this merge request

  • Merged in GPU; I'll be able to merge this in myself tomorrow morning at the earliest.

  • added 3 commits


  • added 1 commit

    • 6b423261 - Add GPU schedule to edge detection


  • added 1 commit

    • dee7b0f5 - Panic if building cuda runtime fails


  • added 1 commit

    • 147bc34d - Add cuda feature to edge detection


  • added 1 commit


  • added 1 commit


  • added 1 commit

    • b1a3e3bc - Fixing code via CI is annoying


    • Seems to be running on GPU now. I'm kinda surprised, though, that on GPU it manages to satisfy an assert_eq!, especially given the issues we had with that on Cava.

    • Huh - might it just come down to a specific operator used in cava but not in edge that violates IEEE 754 extra hard?

    • That's possible. Cava performs integer-to-float (and vice versa) conversions in its kernels, while edge detection does not. I wonder if that could be the issue.
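
      If bit-exact equality ever stops holding on GPU, the verification tweak mentioned in the description could be as simple as this minimal sketch (a hypothetical helper, not current harness code): compare CPU and GPU outputs within an absolute tolerance instead of using assert_eq!:

        // Hypothetical tolerance-based check for float outputs.
        fn assert_close(cpu: &[f32], gpu: &[f32], eps: f32) {
            assert_eq!(cpu.len(), gpu.len(), "output length mismatch");
            for (i, (a, b)) in cpu.iter().zip(gpu).enumerate() {
                assert!(
                    (a - b).abs() <= eps,
                    "index {i}: cpu={a} gpu={b} differ by more than {eps}"
                );
            }
        }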

    • I'm OK merging this in w/o forkify, btw, since cava on main also doesn't currently have forkify enabled for CI. That said, getting edge and cava working with forkify is high priority, and I'll be working on that shortly.

    • Sounds good. I actually haven't tested this with forkify, so I'll test forkify and then merge whatever works.

  • added 1 commit

