FLUX.1 Kontext got SUPERCHARGED! @NVIDIA_AI_PC TensorRT acceleration delivers 2x faster inference on RTX GPUs, and quantization cuts weight memory from 24GB to ~7GB (FP4) while maintaining quality. Production-ready BF16/FP8/FP4 variants are now on @huggingface
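The memory figures are consistent with simple bytes-per-parameter arithmetic. A minimal sketch below, assuming a roughly 12B-parameter transformer (an assumption, not stated in the post): BF16 stores 2 bytes per weight, FP8 one byte, FP4 half a byte, so FP4 weights land near 6GB, with runtime overhead accounting for the rest of the cited ~7GB.

```python
# Back-of-envelope check of the quantization memory savings.
# The 12B parameter count is an assumption for illustration.
def model_weight_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

N_PARAMS = 12e9  # assumed parameter count
for name, bits in [("BF16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name}: ~{model_weight_gb(N_PARAMS, bits):.0f} GB weights")
# BF16 works out to 24 GB and FP4 to 6 GB of weights alone;
# activations and runtime buffers explain the gap up to ~7 GB.
```

This covers weights only; actual peak VRAM also depends on resolution, batch size, and the text encoders.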