Sometimes your fine-tuned language model works as expected but you need faster inference. Other times you need to reduce its memory footprint. By converting your models to the GGUF format you can store quantized models and run them on top of the fast llama.cpp inference engine. [5 min read]
What are the benefits of using Hugging Face for sharing your datasets? Not sure, really, but let's try it and see what all the hype is about. [5 min read]
Share your model and your dataset, and provide a simple mechanism for using them. That is what research is all about. Hugging Face gives you a great infrastructure for doing that, and a little more. [5 min read]
A DGA (domain generation algorithm) is a mechanism used by malware to establish contact with its C2 channel. This is the second post in the series on creating a simple DGA using text-generation techniques, in this case a CNN built with Keras and TensorFlow for R. [6 min read]
The use of artificial intelligence (AI) algorithms in various fields is becoming an integral part of our lives. While some people are opposed to their use, others have embraced the technology and are using it. I am one of them. [6 min read]