One of the most requested topics for the channel in the last year or two has been to cover transformers, the neural network architecture behind large language models and so many other tools in the modern wave of AI.
Today I published the first of what will be several chapters about this topic.
Also, for any of you supporting me on Patreon (or who would like to start), there’s a draft available of the next chapter, which digs into the attention mechanism. I always find it incredibly valuable to get some eyes on a project before finalizing and posting it, and in this case I’m guessing it will be about a week before the next chapter goes live more broadly.
Best,
Grant