Date: Mon, 11 Nov 2019 14:18:04 +0000
From: Aws Albarghouthi <aws@xxxxxxxxxxx>
Subject: [pl-seminar] Fwd: Seminar of potential interest to Multifacet (this coming monday @4pm)
---------- Forwarded message ---------
From: EFTYCHIOS SIFAKIS <sifakis@xxxxxxxxxxx>
Date: Fri, Nov 8, 2019 at 1:02 PM
Subject: Seminar of potential interest to Multifacet (this coming monday @4pm)
To: multifacet@xxxxxxxxxxx, Matt Sinclair <msinclair@xxxxxxxx>, Aws Albarghouthi <aws@xxxxxxxxxxx>, LORIS D'ANTONI <ldantoni@xxxxxxxx>, Guri Sohi <sohi@xxxxxxxxxxx>
Dear all,
I'm hosting a "Graphics" Seminar by Yuanming Hu (PhD student @ MIT) on Monday, on a topic that I suspect might be of interest.
Yuanming (together with his co-authors, who were also the folks behind Halide) has been working on a domain-specific language for operations on sparse, grid-embedded data.
Given its focus on performance optimizations for GPU and CPU targets, and its relation to "sparse convolutions" and other currently popular topics in ML/deep learning, I strongly suspect this work will be quite impactful.
Please feel free to attend, or ask your students if they might be interested.
TITLE:
Taichi: A Language for High-Performance Numerical Simulation and Differentiable Programming on Sparse Data Structures
Date: Monday, November 11, 2019
Time: 4-5 p.m.
Abstract:
3D volumetric data are often spatially sparse. To exploit such sparsity, people have developed hierarchical sparse voxel data structures such as SPGrid and VDB. However, developing and using these high-performance sparse data structures is challenging, due to their intrinsic complexity and overhead. We propose Taichi, a new data-oriented programming language for efficiently authoring, accessing, and maintaining such data structures. The language offers a high-level, data structure-agnostic interface for writing computation code; the user specifies the data structure independently. We provide several elementary components with different sparsity properties that can be arbitrarily composed to create a wide range of multi-level sparse data structures. This decoupling of data structures from computation makes it easy to experiment with different data structures without changing computation code, and allows users to write computation as if they were working with a dense array. Our compiler then uses the semantics of the data structure and index analysis to automatically optimize for locality, remove redundant operations for coherent accesses, maintain sparsity and memory allocations, and generate efficient parallel and vectorized instructions for CPUs and GPUs. With 1/10th as many lines of code, we achieve 4.55× higher performance on average, compared to hand-optimized reference implementations. We have also developed a differentiable programming extension to this language.

Aws
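To give a feel for the decoupling the abstract describes, here is a minimal sketch in plain Python (this is an illustration of the general idea, not Taichi's actual API or compiler): data lives in fixed-size blocks allocated lazily, while computation code sees ordinary dense-style (i, j) indexing.

```python
# Illustrative sketch only: a sparse 2D grid storing values in lazily
# allocated fixed-size blocks, exposed through dense-style indexing.

class SparseGrid:
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}  # (block_i, block_j) -> flat list of block values

    def _locate(self, i, j):
        # Map a global cell index to (block key, offset within block).
        b = self.block_size
        return (i // b, j // b), (i % b) * b + (j % b)

    def __setitem__(self, ij, value):
        key, off = self._locate(*ij)
        # Allocate the block lazily, so empty regions cost no memory.
        block = self.blocks.setdefault(key, [0.0] * self.block_size ** 2)
        block[off] = value

    def __getitem__(self, ij):
        key, off = self._locate(*ij)
        block = self.blocks.get(key)
        return 0.0 if block is None else block[off]

    def active_cells(self):
        # Iterate only over allocated blocks, skipping empty space entirely.
        b = self.block_size
        for (bi, bj), block in self.blocks.items():
            for off, v in enumerate(block):
                if v != 0.0:
                    yield (bi * b + off // b, bj * b + off % b), v


grid = SparseGrid(block_size=4)
grid[0, 1] = 2.5
grid[100, 200] = 1.0  # far-away cell: one extra block, not a huge dense array
print(len(grid.blocks))                  # only 2 blocks allocated
print(sorted(dict(grid.active_cells())))
```

A kernel written against `grid[i, j]` never mentions blocks, so the storage layout (block size, pointer vs. bitmask levels, etc.) can be swapped without touching it; Taichi's contribution is doing this decoupling in a compiled language, with the compiler exploiting the structure's semantics for optimization.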