<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>HPC Posts Archive | Puget Systems</title>
	<atom:link href="https://www.pugetsystems.com/all-hpc/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.pugetsystems.com/all-hpc/</link>
	<description>Workstations for creators.</description>
	<lastBuildDate>Fri, 16 Feb 2024 18:06:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.4.3</generator>

<image>
	<url>https://www.pugetsystems.com/wp-content/uploads/2022/08/Puget-Systems-2020-logomark-color-500-48x48.png</url>
	<title>HPC Posts Archive | Puget Systems</title>
	<link>https://www.pugetsystems.com/all-hpc/</link>
	<width>48</width>
	<height>48</height>
</image> 
	<item>
		<title>Benchmarking with TensorRT-LLM</title>
		<link>https://www.pugetsystems.com/labs/hpc/benchmarking-with-tensorrt-llm/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/benchmarking-with-tensorrt-llm/#respond</comments>
		
		<dc:creator><![CDATA[Jon Allman]]></dc:creator>
		<pubDate>Fri, 16 Feb 2024 18:06:51 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=23187</guid>

					<description><![CDATA[<p>Evaluating the speed of GeForce RTX 40-Series GPUs using NVIDIA's TensorRT-LLM tool for benchmarking GPU inference performance.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/benchmarking-with-tensorrt-llm/">Benchmarking with TensorRT-LLM</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/benchmarking-with-tensorrt-llm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Experiences with Multi-GPU Stable Diffusion Training</title>
		<link>https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/#respond</comments>
		
		<dc:creator><![CDATA[Jon Allman]]></dc:creator>
		<pubDate>Mon, 29 Jan 2024 23:22:19 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=22714</guid>

					<description><![CDATA[<p>Results and thoughts with regard to testing a variety of Stable Diffusion training methods using multiple GPUs.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/">Experiences with Multi-GPU Stable Diffusion Training</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>LLM Server Setup Part 2 &#8212; Container Tools</title>
		<link>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-2-container-tools/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-2-container-tools/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Mon, 20 Nov 2023 22:15:10 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=21032</guid>

					<description><![CDATA[<p>This post is Part 2 in a series on how to configure a system for LLM deployments and development usage. Part 2 covers installing and configuring the container tools Docker and NVIDIA Enroot.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-2-container-tools/">LLM Server Setup Part 2 &#8212; Container Tools</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-2-container-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>LLM Server Setup Part 1 &#8211; Base OS</title>
		<link>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-1-base-os/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-1-base-os/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Wed, 15 Nov 2023 23:42:42 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=20669</guid>

					<description><![CDATA[<p>This post is Part 1 in a series on how to configure a system for LLM deployments and development usage. The configuration will be suitable for multi-user deployments and also useful for smaller development systems. Part 1 is about the base Linux server setup.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-1-base-os/">LLM Server Setup Part 1 &#8211; Base OS</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-1-base-os/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Can You Run A State-Of-The-Art LLM On-Prem For A Reasonable Cost?</title>
		<link>https://www.pugetsystems.com/labs/hpc/can-you-run-a-state-of-the-art-llm-on-prem-for-a-reasonable-cost/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/can-you-run-a-state-of-the-art-llm-on-prem-for-a-reasonable-cost/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Mon, 17 Jul 2023 23:44:37 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=15672</guid>

					<description><![CDATA[<p>In this post, we address the question that's been on everyone's mind: Can you run a state-of-the-art Large Language Model on-prem? With *your* data and *your* hardware? At a reasonable cost?</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/can-you-run-a-state-of-the-art-llm-on-prem-for-a-reasonable-cost/">Can You Run A State-Of-The-Art LLM On-Prem For A Reasonable Cost?</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/can-you-run-a-state-of-the-art-llm-on-prem-for-a-reasonable-cost/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Note: How To Setup Apache on Ubuntu 22.04 For User public_html</title>
		<link>https://www.pugetsystems.com/labs/hpc/note-how-to-setup-apache-on-ubuntu-22-04-for-user-public_html/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/note-how-to-setup-apache-on-ubuntu-22-04-for-user-public_html/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Fri, 02 Jun 2023 15:37:23 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=14961</guid>

					<description><![CDATA[<p>This is a short note on setting up the Apache web server to allow system users to create personal websites and web apps in their home directories.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/note-how-to-setup-apache-on-ubuntu-22-04-for-user-public_html/">Note: How To Setup Apache on Ubuntu 22.04 For User public_html</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/note-how-to-setup-apache-on-ubuntu-22-04-for-user-public_html/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>GTC23 Notes And Selected Sessions</title>
		<link>https://www.pugetsystems.com/labs/hpc/gtc23-notes-and-selected-sessions/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/gtc23-notes-and-selected-sessions/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Wed, 12 Apr 2023 15:27:46 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=14289</guid>

					<description><![CDATA[<p>NVIDIA GTC 2023 was outstanding! To say that about a virtual conference tells you how much I value it. This post is largely a catalog of the talks I found interesting along with titles that I think will be interesting to a larger audience and my colleagues at Puget Systems.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/gtc23-notes-and-selected-sessions/">GTC23 Notes And Selected Sessions</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/gtc23-notes-and-selected-sessions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How To Use Linux Kernel Boot Options</title>
		<link>https://www.pugetsystems.com/labs/hpc/how-to-use-linux-kernel-boot-options/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/how-to-use-linux-kernel-boot-options/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Mon, 13 Mar 2023 23:05:06 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=14011</guid>

					<description><![CDATA[<p>This post is a short HowTo on passing Linux kernel boot options during OS installation and persisting them for future system starts.</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/how-to-use-linux-kernel-boot-options/">How To Use Linux Kernel Boot Options</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/how-to-use-linux-kernel-boot-options/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Problems With RTX4090 MultiGPU and AMD vs Intel vs RTX6000Ada or RTX3090</title>
		<link>https://www.pugetsystems.com/labs/hpc/problems-with-rtx4090-multigpu-and-amd-vs-intel-vs-rtx6000ada-or-rtx3090/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/problems-with-rtx4090-multigpu-and-amd-vs-intel-vs-rtx6000ada-or-rtx3090/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Wed, 15 Feb 2023 22:55:52 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=13754</guid>

					<description><![CDATA[<p>I was prompted to do some testing by a commenter on one of my recent posts. They had concerns about problems with dual NVIDIA RTX4090s on AMD Threadripper Pro platforms. I ran some applications to reproduce the reported problems and dug deeper into the issues with more extensive testing. The table below tells all!</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/problems-with-rtx4090-multigpu-and-amd-vs-intel-vs-rtx6000ada-or-rtx3090/">Problems With RTX4090 MultiGPU and AMD vs Intel vs RTX6000Ada or RTX3090</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/problems-with-rtx4090-multigpu-and-amd-vs-intel-vs-rtx6000ada-or-rtx3090/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Ryzen 7950x Zen4 AVX512 Performance With AMD AOCCv4 HPL HPCG HPL-MxP</title>
		<link>https://www.pugetsystems.com/labs/hpc/ryzen-7950x-zen4-avx512-performance-with-amd-aoccv4-hpl-hpcg-hpl-mxp/</link>
					<comments>https://www.pugetsystems.com/labs/hpc/ryzen-7950x-zen4-avx512-performance-with-amd-aoccv4-hpl-hpcg-hpl-mxp/#respond</comments>
		
		<dc:creator><![CDATA[Dr. Donald Kinghorn]]></dc:creator>
		<pubDate>Fri, 20 Jan 2023 22:33:29 +0000</pubDate>
				<guid isPermaLink="false">https://www.pugetsystems.com/?post_type=hpc_post&#038;p=13309</guid>

					<description><![CDATA[<p>This post is a first look at the performance of the Ryzen 7950x CPU using the latest AMD compiler release, with support for the Zen4 architecture including AVX512 vector instructions. Performance is tested using the standard HPC benchmarks HPL (High Performance Linpack) and HPCG (High Performance Conjugate Gradient), along with the newer HPC Top500 benchmark HPL-MxP (formerly HPL-AI).</p>
<p>The post <a href="https://www.pugetsystems.com/labs/hpc/ryzen-7950x-zen4-avx512-performance-with-amd-aoccv4-hpl-hpcg-hpl-mxp/">Ryzen 7950x Zen4 AVX512 Performance With AMD AOCCv4 HPL HPCG HPL-MxP</a> appeared first on <a href="https://www.pugetsystems.com">Puget Systems</a>.</p>
]]></description>
		
					<wfw:commentRss>https://www.pugetsystems.com/labs/hpc/ryzen-7950x-zen4-avx512-performance-with-amd-aoccv4-hpl-hpcg-hpl-mxp/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
