Tags: AMD Radeon Instinct MI100, ATI, Bioinformatics, Biology, CUDA, Next-Generation sequencing, nVidia, nVidia GeForce RTX 3090, OpenCL, Package, Sequence alignment, Tesla A100
Tags: Biology, Computer science, Java, Next-Generation sequencing, nVidia, OpenCL, Package, R, Tesla K20
Tags: Algorithms, Bayesian, Bioinformatics, Biology, Filtering, Next-Generation sequencing, nVidia, nVidia GeForce GTX 580, nVidia GeForce GTX 780, OpenCL, Package, Thesis
Tags: Algorithms, Biology, CUDA, Databases, Next-Generation sequencing, nVidia, nVidia GeForce GTX 690, nVidia GeForce GTX 780, Package, Smith-Waterman algorithm
Tags: Bioinformatics, Biology, Computer science, CUDA, Next-Generation sequencing, nVidia, nVidia GeForce GTX 770
Tags: Bioinformatics, Biology, CUDA, Next-Generation sequencing, nVidia, Sequence alignment, Tesla K20
Tags: Algorithms, Bioinformatics, Biology, CUDA, Next-Generation sequencing, nVidia, Package, Tesla C2070, Tesla M2050
Tags: Algorithms, Bioinformatics, Biology, CUDA, Next-Generation sequencing, nVidia, nVidia GeForce GTX 480, Package, Python, Smith-Waterman algorithm
Tags: Algorithms, Biology, CUDA, Databases, Next-Generation sequencing, nVidia, Package, Tesla S1070
Tags: Bioinformatics, Biology, CUDA, Next-Generation sequencing, nVidia, nVidia GeForce GTX 280, Package
Tags: Bioinformatics, Biology, CUDA, MPI, Next-Generation sequencing, nVidia, OpenMP, Package, Tesla S1070