Most viewed papers (last 30 days)
- Over-synchronization in GPU Programs
- PyOMP: Parallel programming for CPUs and GPUs with OpenMP and Python
- LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators
- A Distributed-memory Tridiagonal Solver Based on a Specialised Data Structure Optimised for CPU and GPU Architectures
- SoK: A Systems Perspective on Compound AI Threats and Countermeasures
- Profile Util library: A quick and easy way to get MPI, OpenMP and GPU runtime information
- On a Simplified Approach to Achieve Parallel Performance and Portability Across CPU and GPU Architectures
- Context Parallelism for Scalable Million-Token Inference
- NEO: Saving GPU Memory Crisis with CPU Offloading for Online LLM Inference
- Edify 3D: Scalable High-Quality 3D Asset Generation