Neural Network Distillation
Published:
Overview of neural network distillation as done in “Distilling the Knowledge in a Neural Network” (Hinton et al., 2015).
Published:
Some notes on how transformer-decoder language models work, taking GPT-2 as an example, with plenty of references for digging deeper.
Give me your prompt, would you kindly?
Published:
Extracting the system prompt from GitHub Copilot.
Published:
Executing the Stable Diffusion text-to-image model on an AMD Ryzen 5 5600G integrated GPU (iGPU).