We looked at ShaderGraph in more detail in Unity. It's a very convenient, node-based way to author shaders that sidesteps syntax errors entirely. And if you ever need to drop down to raw HLSL -- there's the Custom Function node!
Unity has a simple drawing API for debugging called Gizmos. It's super useful for debugging linear algebra problems and working out complicated geometry code. Make use of it!
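As a minimal sketch of what that looks like: the component below (a hypothetical example, names are mine) draws a ray in the Scene view and marks the closest point on that ray to the object it's attached to -- exactly the kind of linear algebra that's much easier to debug when you can see it.

```csharp
using UnityEngine;

// Hypothetical example: visualize a ray and the closest point on it
// to this object. Attach to any GameObject; gizmos render in the
// Scene view (OnDrawGizmos runs in the editor, not in builds).
public class GizmoDebug : MonoBehaviour
{
    public Vector3 rayOrigin = Vector3.zero;
    public Vector3 rayDirection = Vector3.forward;

    void OnDrawGizmos()
    {
        Vector3 dir = rayDirection.normalized;

        Gizmos.color = Color.yellow;
        Gizmos.DrawRay(rayOrigin, dir * 10f);

        // Project this object's position onto the ray to find the closest point.
        float t = Vector3.Dot(transform.position - rayOrigin, dir);
        Vector3 closest = rayOrigin + dir * t;

        Gizmos.color = Color.red;
        Gizmos.DrawSphere(closest, 0.1f);
        Gizmos.DrawLine(transform.position, closest);
    }
}
```

Tweak the projection math and watch the sphere move in real time -- that immediate feedback is the whole point of Gizmos.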
Unity also exposes pure mathematical objects that cannot be rendered or drawn directly but are useful for solving geometric problems. We looked at Plane and started looking at Ray, but Unity also has things like Rect and Bounds.
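Here's a small sketch of Plane and Ray working together (the class name and scenario are made up for illustration): intersecting a ray with a plane, and asking which side of the plane a point is on.

```csharp
using UnityEngine;

// Hypothetical sketch: Plane and Ray are pure math objects -- nothing
// here gets rendered, but they do the heavy geometric lifting.
public class PlaneRayDemo : MonoBehaviour
{
    void Start()
    {
        // A plane facing up, passing through the origin.
        Plane ground = new Plane(Vector3.up, Vector3.zero);

        // A ray starting above the plane, angled downward.
        Ray ray = new Ray(new Vector3(0, 5, 0), new Vector3(1, -1, 0).normalized);

        // Raycast gives the distance along the ray to the intersection, if any.
        if (ground.Raycast(ray, out float enter))
        {
            Vector3 hit = ray.GetPoint(enter);
            Debug.Log($"Ray hits plane at {hit}");
        }

        // GetSide reports which side of the plane a point is on --
        // exactly the question a mesh-slicing script needs to answer.
        bool above = ground.GetSide(new Vector3(0, 1, 0)); // true: normal side
        Debug.Log(above);
    }
}
```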
We spent some time looking at meshes in Blender and Unity. We saw Blender's representation of vertex position, normal, and UV coordinates, which are the most conventional attributes available in modern graphics pipelines. In the first half of the semester we saw that attributes can be anything, but in practice tools and engines standardize around conventions like these.
Unity exposes meshes via the Mesh API. You can use this C# API to manipulate meshes on the CPU side. This can be quite expensive, so in general, when possible, you should try to do things in a vertex shader. However -- vertex shaders cannot add, remove, or reconnect vertices -- only reposition existing ones! If you need more than that, you might need to manipulate your mesh using the Mesh API.
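To make the CPU-side workflow concrete, here's a small sketch (class name and `amount` parameter are mine) that reads a mesh's vertices, displaces them along their normals, and writes them back. This particular effect *could* be done in a vertex shader -- it's shown here only to demonstrate the read-modify-write pattern that also underlies edits shaders can't do.

```csharp
using UnityEngine;

// Hypothetical sketch: inflate a mesh by pushing each vertex
// outward along its normal, using the CPU-side Mesh API.
[RequireComponent(typeof(MeshFilter))]
public class InflateMesh : MonoBehaviour
{
    public float amount = 0.1f;

    void Start()
    {
        // .mesh (not .sharedMesh) gives a per-instance copy safe to edit.
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // These accessors return copies -- grab them once, not in a loop.
        Vector3[] vertices = mesh.vertices;
        Vector3[] normals = mesh.normals;

        for (int i = 0; i < vertices.Length; i++)
            vertices[i] += normals[i] * amount;

        mesh.vertices = vertices;  // assigning back uploads the new data
        mesh.RecalculateBounds();  // keep culling bounds in sync
    }
}
```

Note the cost hiding in plain sight: `mesh.vertices` copies the whole array every time you touch it, and assigning it back re-uploads the mesh -- which is why the vertex-shader route wins whenever repositioning alone is enough.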
As an example of something you could not do in a vertex shader, we started working on a mesh-slicing script that would separate a mesh into two meshes, divided by a user-defined plane. We didn't quite get there -- mesh code is tricky, even when you had it working earlier that morning! -- but here's the working code with extensive comments for you to review and study. Consider it an advanced example of the Mesh API: https://gist.github.com/nasser/6af19c6bd286b10cc869ed9a88894ce6.