
Week 5: Picking Parts

  • Oct 14, 2019
  • 3 min read

Weekly Recap - Alex Esclamado


This past week we performed another walkthrough with Dr. Asghari. We learned about the trigger modules above the workbench that dictate the frequencies at which the sample is scanned. Because we want a 100 x 100 x 100 image, the scan operates at a 100:1 frequency ratio. The process mimics a printer: the beam quickly bounces between the horizontal edges, imaging one line at a time, while it slowly advances down the page.
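The 100:1 ratio can be made concrete with a small sketch (illustrative only, not the lab's actual code): for every tick of the slow vertical trigger, the fast horizontal trigger must tick once per pixel in the line.

```python
# Illustrative sketch of the trigger-frequency relationship for a raster
# scan: the fast (horizontal) trigger fires 100x for each firing of the
# slow (vertical) trigger -- the 100:1 ratio mentioned above.
def trigger_frequencies(pixels_per_line, lines_per_frame, frame_rate_hz):
    slow_hz = lines_per_frame * frame_rate_hz   # one tick per scan line
    fast_hz = pixels_per_line * slow_hz         # one tick per pixel
    return fast_hz, slow_hz

fast, slow = trigger_frequencies(100, 100, 1.0)
print(fast / slow)  # -> 100.0, the 100:1 ratio
```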


Additionally, we were able to produce an image on the oscilloscope and became more familiar with the Python code and how the two interact. We learned how to troubleshoot the oscilloscope–PC connection: verifying that the Ethernet cables reach the same hub and checking whether the PC can see the scope through the Keysight Connection Expert application.
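In Python, the same connection check is typically done over VISA. The sketch below uses PyVISA's real resource-listing API; the resource address shown is hypothetical, and the small helper that picks out a LAN instrument is just for illustration.

```python
# Hedged sketch of a scope-connection check. find_lan_scope() is a pure
# helper (hypothetical, for illustration) that pulls the first LAN (TCPIP)
# instrument out of a VISA resource listing.
def find_lan_scope(resources):
    """Return the first TCPIP VISA resource string, or None if absent."""
    return next((r for r in resources if r.startswith("TCPIP")), None)

# With hardware attached, the listing would come from PyVISA:
#   import pyvisa
#   rm = pyvisa.ResourceManager()
#   addr = find_lan_scope(rm.list_resources())
#   scope = rm.open_resource(addr)
#   print(scope.query("*IDN?"))  # standard SCPI identity query

print(find_lan_scope(("USB0::0x2A8D::INSTR", "TCPIP0::192.168.1.10::INSTR")))
# -> TCPIP0::192.168.1.10::INSTR
```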


We were given a handful of tips to remember when working with this connection. When the main code is running and data is streaming from the scope to the computer, manual adjustments on the oscilloscope are disabled, so all parameters must be set within the code. Additionally, the oscilloscope tends to run faster when it focuses solely on streaming data rather than displaying the signals simultaneously, so the code turns the oscilloscope's display off. We practiced changing these parameters from the computer, such as the trigger settings that synchronize the different signals.
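Since the front panel is locked out while streaming, the setup ends up as a list of SCPI commands sent before acquisition. The command strings below are typical Keysight syntax but are an assumption and should be checked against the scope's programming guide; the display-off command in particular varies by model and is left as a comment.

```python
# Sketch of scripted scope setup (command strings are assumed typical
# Keysight SCPI syntax; verify against the model's programming guide).
def setup_commands(source="CHANnel1", timebase_s=1e-6):
    return [
        f":TRIGger:EDGE:SOURce {source}",   # sync acquisition to this signal
        f":TIMebase:SCALe {timebase_s}",    # horizontal scale, seconds/div
        # a display-off command would go here; its exact form is model-specific
    ]

# With a PyVISA session open as `scope`, the setup would be applied with:
#   for cmd in setup_commands():
#       scope.write(cmd)

print(setup_commands())
```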


After discussing the potential need for a new PC in the laboratory, Trevor and I drafted potential builds using the PCPartPicker website. This site is very helpful for its broad, up-to-date library of components and its parametric filters for narrowing the search; it also aggregates current pricing from several online vendors. The builds were made under the premise of a reasonably high budget of approximately $2,000 - $3,000. In the context of this project, we found that the much more expensive processors, such as Intel's i9-9980XE or AMD's Threadripper, would be entirely unnecessary.


Part List:


Background Research - Ryzen vs. Intel - Alex Esclamado


We chose AMD's new Ryzen 3000 series as the basis for our potential build. We believed a Ryzen processor to be our best option after considering many factors, including but not limited to core count and multi-threaded task performance. The Ryzen 3000 series processors boast more cores and threads than Intel's latest Coffee Lake processors. More cores and threads translate to better productivity in multi-threaded tasks and when running multiple applications in the background. Even though Intel boasts stronger single-core performance, the AMD platform does not fall far behind.


Often, Ryzen processors are used in workstations, as they are well suited to video editing and rendering large amounts of data. This project involves similar workloads and therefore stands to benefit from an AMD CPU.


3D Rendering Components Research and Usage Analysis - Trevor Wong


This week I researched the top PC components for 3D image processing and rendering. My research showed that for CPU-only rendering, a higher core count performs better than a faster clock. However, with the addition of a GPU, a CPU with a fast core clock becomes beneficial as well. This led me to propose the AMD Ryzen 9 3900X, as it not only has 12 cores but also a 3.8 GHz base clock with a 4.6 GHz boost. This satisfies both criteria of a high core count and a high clock speed while remaining a high-end consumer CPU. This choice is further supported by its high Cinebench score on cgdirector.com. As for GPUs, the trend tends to be that NVIDIA creates the more powerful cards, while AMD builds the more budget-friendly cards. Thus, the newest line of NVIDIA RTX SUPER cards was our best bet, particularly the RTX 2080 SUPER.



Figure 1: CPU and Memory Usage on Idle

Figure 2: CPU and Memory Usage While Running Code

Comparing Figure 1 and Figure 2, it can be seen that the laser system uses approximately 3 GB of memory and roughly 50% more CPU. Because this CPU has integrated graphics, no separate GPU usage is shown. Even so, the CPU usage is not as high as we expected. My hypothesis is that something is capping the CPU usage. One possible reason is that the data is not being transferred to the PC quickly enough to utilize the CPU's full potential. Another is that the CPU is capping itself, as it is not well equipped for this type of task and therefore cannot max out its usage. It is also possible that the program simply only needs this percentage of CPU. For memory, the 16 GB this computer has is plenty for the current system. However, I do expect the RAM usage to increase once the oscilloscope and PC have a faster connection between them.
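Readings like those in Figures 1 and 2 can also be captured programmatically rather than from Task Manager. The live readings would come from the third-party psutil library (real calls, shown as comments); the before/after delta helper is a hypothetical illustration, exercised with the article's approximate numbers.

```python
# Sketch of programmatic usage monitoring. usage_delta() is a hypothetical
# helper comparing (cpu_percent, memory_gb) snapshots taken while idle and
# while the acquisition code runs.
def usage_delta(idle, running):
    """Difference in (cpu_percent, mem_gb) between idle and running states."""
    return (running[0] - idle[0], running[1] - idle[1])

# Live snapshots would come from psutil:
#   import psutil
#   cpu = psutil.cpu_percent(interval=1.0)
#   mem_gb = psutil.virtual_memory().used / 2**30

# Using the approximate figures from the article:
print(usage_delta((5.0, 4.0), (55.0, 7.0)))  # -> (50.0, 3.0): +50% CPU, +3 GB RAM
```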





©2019 by Ultra-Fast Imaging. Proudly created with Wix.com
