
{"id":16209,"date":"2024-10-24T12:45:03","date_gmt":"2024-10-24T12:45:03","guid":{"rendered":"https:\/\/mycryptomania.com\/?p=16209"},"modified":"2024-10-24T12:45:03","modified_gmt":"2024-10-24T12:45:03","slug":"text-classification-made-easy-with-setfit","status":"publish","type":"post","link":"https:\/\/mycryptomania.com\/?p=16209","title":{"rendered":"Text Classification Made Easy with SetFit\u2026"},"content":{"rendered":"<p><strong>What exactly is text classification?<\/strong><\/p>\n<p>Text classification is like teaching a computer to sort written text into different categories\u200a\u2014\u200aimagine organizing emails into \u201cspam\u201d or \u201cinbox.\u201d The Hugging Face SetFit framework is a tool that makes this teaching process simpler and more efficient. It allows us to train computers to understand and classify text using only a small amount of example\u00a0data.<\/p>\n<p>This means we can quickly build models that help computers grasp human language nuances, even when we don\u2019t have much data. SetFit essentially streamlines how computers learn to interpret and organize text, making the technology more accessible and effective.<\/p>\n<p><strong>What is vectorization?<\/strong><\/p>\n<p>Before we train our text classification model, let\u2019s understand a key concept called <strong>vectorization<\/strong>. It might sound technical, but it\u2019s quite\u00a0simple.<\/p>\n<p>Think of vectorization as translating words into numbers so that computers can understand them. Computers don\u2019t comprehend language like we do\u200a\u2014\u200athey need numbers to process information.<\/p>\n<p>Example:<\/p>\n<p><strong>Words as Numbers<\/strong>: Imagine each word is assigned a unique number or a set of numbers, much like giving every house on a street its address. This way, the computer knows exactly where to find each\u00a0word.<strong>Creating a Word Map<\/strong>: Imagine a map on which similar words are located close to each other. 
For example, \u201chappy\u201d and \u201cjoyful\u201d might be neighbors on this map, while \u201chappy\u201d and \u201csad\u201d are farther\u00a0apart.<\/p>\n<p><strong>Understanding Relationships<\/strong>: By mapping words this way, computers can understand relationships between words. They can see that \u201cking\u201d and \u201cqueen\u201d are related, just like houses in the same neighborhood.<\/p>\n<p>Vectorization is all about helping computers \u201cread\u201d text by converting words into a numerical format they can process. It\u2019s a crucial step in text classification, allowing machines to sort and make sense of written information, just like we do\u200a\u2014\u200aonly with\u00a0numbers.<\/p>\n<p>Understanding this concept gives us insight into how technologies like search engines, voice assistants, and spam filters work. They all rely on vectorization to interpret and manage the vast amounts of text they handle every\u00a0day.<\/p>\n<p><strong>What is Fine-tuning?<\/strong><\/p>\n<p>Imagine you have a smartphone with general settings that work for everyone. But to make it truly yours, you adjust the settings\u200a\u2014\u200alike setting your preferred language, choosing wallpaper, or arranging apps the way you like. <strong>Fine-tuning<\/strong> in machine learning is quite similar. We take a model that\u2019s already learned general language patterns and tweak it slightly so it performs better on our specific\u00a0task.<\/p>\n<p><strong>Starting with a Pre-Trained Model:<\/strong><\/p>\n<p>Think of this as a student who has completed a general education. They have a broad knowledge of various subjects.<\/p>\n<p><strong>Introducing Specific Training\u00a0Data:<\/strong><\/p>\n<p>We provide the model with examples related to our particular task. 
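Concretely, the specific training data is nothing more than text paired with labels. A toy sketch (the sentences and the `train_examples` name are invented for illustration):

```python
# Toy labeled examples for a sentiment task (invented data).
# Each item pairs a piece of text with its class: 1 = positive, 0 = negative.
train_examples = [
    ("A wonderful, heartfelt film with great acting.", 1),
    ("Two hours I will never get back.", 0),
    ("Easily the best movie of the year.", 1),
    ("Dull, predictable, and far too long.", 0),
]

texts = [text for text, _ in train_examples]
labels = [label for _, label in train_examples]
print(f"{len(texts)} examples ({sum(labels)} positive, {len(labels) - sum(labels)} negative)")
```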
For instance, if we\u2019re building a model to detect positive or negative movie reviews, we\u2019d give labeled examples of such\u00a0reviews.<\/p>\n<p><strong>Adjusting the Model\u200a\u2014\u200aFine-Tuning:<\/strong><\/p>\n<p>The model uses these examples to adjust its understanding, much like our student taking specialized courses to become an expert in a specific\u00a0field.<\/p>\n<p><strong>Result:<\/strong><\/p>\n<p>A model that\u2019s adept at performing our specific task with higher accuracy.<\/p>\n<p>Fine-tuning is like giving our model a focused training session on what matters most to us. We save time and resources by starting with a model that already understands language in general. Then, by fine-tuning it with specific examples, we make it an expert in our desired\u00a0task.<\/p>\n<p>Understanding fine-tuning helps us appreciate how modern technology can be adapted quickly and efficiently to meet various needs, making our interactions with machines more seamless and effective.<\/p>\n<p><strong>What is\u00a0SetFit?<\/strong><\/p>\n<p>Now that you understand text classification, vectorization, and fine-tuning, we come to the main topic of this\u00a0blog.<\/p>\n<p><strong>SetFit<\/strong>, which stands for <strong>Sentence Transformer Fine-Tuning<\/strong>, is designed to streamline the process of adapting pre-trained language models to specific text classification tasks. Here\u2019s how it\u00a0helps:<\/p>\n<p><strong>Less Data\u00a0Needed:<\/strong><\/p>\n<p>Traditional fine-tuning often requires a large amount of labeled data. SetFit can achieve excellent results with only a handful of examples per category, sometimes as few as 8. This is great when you don\u2019t have a lot of data to work\u00a0with.<\/p>\n<p><strong>User-Friendly Approach:<\/strong><\/p>\n<p>SetFit simplifies the technical steps involved in fine-tuning. 
You don\u2019t need to be an expert in machine learning to get good\u00a0results.<\/p>\n<p><strong>Two-Step Training:<\/strong><\/p>\n<p><strong>Step 1:<\/strong><\/p>\n<p>The model learns to understand the nuances of your specific data through a technique called <strong>contrastive learning<\/strong>, where it figures out how different pieces of text are similar or different.<\/p>\n<p><strong>Step 2:<\/strong><\/p>\n<p>It then learns to classify text into your desired categories based on this understanding.<\/p>\n<p><strong>Quick Results:<\/strong><\/p>\n<p>Because it requires less data and simplifies the training steps, SetFit allows models to be fine-tuned more quickly than traditional methods.<\/p>\n<p><strong>Runs on Standard Computers:<\/strong><\/p>\n<p>You don\u2019t need powerful hardware or special equipment. SetFit is designed to work efficiently on regular computers.<\/p>\n<p><strong>Quality Outcomes:<\/strong><\/p>\n<p>Despite the simplicity and speed, SetFit models still perform very well, often matching the accuracy of models trained with more complex\u00a0methods.<\/p>\n<p>Think of SetFit as using a cake mix instead of baking from\u00a0scratch:<\/p>\n<p><strong>Traditional Baking (Fine-Tuning):<\/strong> You gather all the ingredients, measure everything precisely, and follow complex instructions. It\u2019s time-consuming and requires baking\u00a0skills.<\/p>\n<p><strong>Cake Mix (SetFit):<\/strong> Most of the work is already done for you. You just add a couple of ingredients, mix, and bake. You still get a delicious cake without the\u00a0hassle.<\/p>\n<p>SetFit takes the complexity out of fine-tuning language models for text classification. By reducing the need for large datasets and simplifying the training process, it allows you to create powerful, customized models easily. 
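The two-step recipe above can be made concrete. Step 1 needs pairs of sentences marked similar or dissimilar; the sketch below shows the pair-generation idea with a hypothetical helper (`make_contrastive_pairs` is not part of the setfit API, which builds such pairs internally):

```python
from itertools import combinations

def make_contrastive_pairs(sentences, labels):
    """Pair up labeled sentences for Step 1's contrastive learning.

    Same-label pairs are marked similar (1.0), different-label pairs
    dissimilar (0.0). This helper is only an illustration of the idea.
    """
    pairs = []
    for (s1, l1), (s2, l2) in combinations(zip(sentences, labels), 2):
        pairs.append((s1, s2, 1.0 if l1 == l2 else 0.0))
    return pairs

sentences = ["great movie", "loved it", "awful film", "boring plot"]
labels = [1, 1, 0, 0]
pairs = make_contrastive_pairs(sentences, labels)
print(f"{len(pairs)} pairs, {sum(p[2] for p in pairs):.0f} similar")  # 6 pairs, 2 similar
```

Step 2 then trains an ordinary classification head on top of the embeddings the pair-tuned model produces.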
Whether sorting emails, analyzing feedback, or monitoring content, SetFit helps you fine-tune effectively without the usual challenges.<\/p>\n<p>Understanding how SetFit simplifies fine-tuning gives you the tools to harness advanced AI technology in a practical and accessible way. It\u2019s like having a friendly guide that helps you navigate the world of machine learning without getting bogged down in technical details.<\/p>\n<p>Now let&#8217;s do some coding \uff61\u25d5\u203f\u203f\u25d5\uff61\u00a0\ud83d\uddf2<\/p>\n<p>First, let\u2019s install the required\u00a0library:<\/p>\n<p>pip install setfit<\/p>\n<p>We\u2019ll import the necessary modules from the setfit library and other helpful libraries.<\/p>\n<p>from setfit import SetFitModel, SetFitTrainer<br \/>from sklearn.metrics import accuracy_score<\/p>\n<p>We\u2019ll define our sample sentences and their corresponding labels.<\/p>\n<p># Sample sentences<br \/>sentences = [<br \/>    \"I absolutely loved this movie! The plot was thrilling.\",<br \/>    \"The film was terrible and a complete waste of time.\",<br \/>    \"An enjoyable experience with outstanding performances.\",<br \/>    \"I didn't like the movie; it was boring and too long.\"<br \/>]<\/p>\n<p># Labels: 1 for Positive, 0 for Negative<br \/>labels = [1, 0, 1, 0]<\/p>\n<p>We start with a pre-trained model that hasn\u2019t been fine-tuned for our specific\u00a0task.<\/p>\n<p># Load a pre-trained SetFit model<br \/>model = SetFitModel.from_pretrained(\"sentence-transformers\/paraphrase-mpnet-base-v2\")<\/p>\n<p>Let\u2019s see how the model performs before any fine-tuning. (Note that the classification head is still untrained at this point, so its outputs are essentially arbitrary.)<\/p>\n<p># Get predictions before fine-tuning<br \/>preds_before = model.predict(sentences)<\/p>\n<p>print(\"Predictions before fine-tuning:\")<br \/>for sentence, pred in zip(sentences, preds_before):<br \/>    sentiment = \"Positive\" if pred == 1 else \"Negative\"<br \/>    
print(f'Sentence: \"{sentence}\"\\nPredicted Sentiment: {sentiment}\\n')<\/p>\n<p>Predictions before fine-tuning:<br \/>Sentence: &#8220;I absolutely loved this movie! The plot was thrilling.&#8221;<br \/>Predicted Sentiment: Negative<\/p>\n<p>Sentence: &#8220;The film was terrible and a complete waste of time.&#8221;<br \/>Predicted Sentiment: Positive<\/p>\n<p>Sentence: &#8220;An enjoyable experience with outstanding performances.&#8221;<br \/>Predicted Sentiment: Negative<\/p>\n<p>Sentence: &#8220;I didn&#8217;t like the movie; it was boring and too long.&#8221;<br \/>Predicted Sentiment: Positive<\/p>\n<p>Now, we\u2019ll fine-tune the model using our small\u00a0dataset.<\/p>\n<p># Prepare the training data as a Hugging Face Dataset with \"text\" and \"label\" columns<br \/>from datasets import Dataset<br \/>from sentence_transformers.losses import CosineSimilarityLoss<\/p>\n<p>train_data = Dataset.from_dict({\"text\": sentences, \"label\": labels})<\/p>\n<p># Initialize the trainer<br \/>trainer = SetFitTrainer(<br \/>    model=model,                 # The pre-trained SetFit model we are fine-tuning<br \/>    train_dataset=train_data,    # The training data (sentences and labels) used for fine-tuning<br \/>    eval_dataset=None,           # Optional evaluation data to assess performance during training<br \/>    loss_class=CosineSimilarityLoss,  # Loss function guiding how the model learns (a class, not a string)<br \/>    metric=\"accuracy\",           # Metric to evaluate the model&#8217;s performance<br \/>    batch_size=8,                # Number of samples processed before updating the model (batch size)<br \/>    num_iterations=20,           # Number of pair-generation iterations (more pairs can improve results)<br \/>)<\/p>\n<p># Fine-tune the model<br \/>trainer.train()<\/p>\n<p>Let\u2019s see how the model performs after fine-tuning.<\/p>\n<p># Get predictions after fine-tuning<br \/>preds_after = model.predict(sentences)<\/p>\n<p>print(\"Predictions after fine-tuning:\")<br \/>for sentence, pred in zip(sentences, preds_after):<br \/>    sentiment = \"Positive\" if pred 
== 1 else \"Negative\"<br \/>    print(f'Sentence: \"{sentence}\"\\nPredicted Sentiment: {sentiment}\\n')<\/p>\n<p>Predictions after fine-tuning:<br \/>Sentence: &#8220;I absolutely loved this movie! The plot was thrilling.&#8221;<br \/>Predicted Sentiment: Positive<\/p>\n<p>Sentence: &#8220;The film was terrible and a complete waste of time.&#8221;<br \/>Predicted Sentiment: Negative<\/p>\n<p>Sentence: &#8220;An enjoyable experience with outstanding performances.&#8221;<br \/>Predicted Sentiment: Positive<\/p>\n<p>Sentence: &#8220;I didn&#8217;t like the movie; it was boring and too long.&#8221;<br \/>Predicted Sentiment: Negative<\/p>\n<p>After fine-tuning, the model accurately predicts the sentiments.<\/p>\n<p>We can also look at the probabilities the model assigns to each\u00a0class. (To capture the \u201cbefore\u201d numbers, the first predict_proba call has to run before trainer.train().)<\/p>\n<p># Get probabilities before fine-tuning (run this before calling trainer.train())<br \/>probs_before = model.predict_proba(sentences)<\/p>\n<p>print(\"Probabilities before fine-tuning:\")<br \/>for sentence, prob in zip(sentences, probs_before):<br \/>    print(f'Sentence: \"{sentence}\"\\nProbability (Negative, Positive): {prob}\\n')<\/p>\n<p># Get probabilities after fine-tuning<br \/>probs_after = model.predict_proba(sentences)<\/p>\n<p>print(\"Probabilities after fine-tuning:\")<br \/>for sentence, prob in zip(sentences, probs_after):<br \/>    print(f'Sentence: \"{sentence}\"\\nProbability (Negative, Positive): {prob}\\n')<\/p>\n<p>Probabilities before fine-tuning:<br \/>Sentence: &#8220;I absolutely loved this movie! The plot was thrilling.&#8221;<br \/>Probability (Negative, Positive): [0.6, 0.4]<\/p>\n<p>Sentence: &#8220;The film was terrible and a complete waste of time.&#8221;<br \/>Probability (Negative, Positive): [0.4, 0.6]<\/p>\n<p>&#8230;<\/p>\n<p>Probabilities after fine-tuning:<br \/>Sentence: &#8220;I absolutely loved this movie! 
The plot was thrilling.&#8221;<br \/>Probability (Negative, Positive): [0.1, 0.9]<\/p>\n<p>Sentence: &#8220;The film was terrible and a complete waste of time.&#8221;<br \/>Probability (Negative, Positive): [0.95, 0.05]<\/p>\n<p>&#8230;<\/p>\n<p>The probabilities after fine-tuning show higher confidence in the correct\u00a0class.<\/p>\n<p>By walking through this example with actual code, we\u2019ve showcased how SetFit simplifies the fine-tuning process, making it straightforward to adapt pre-trained models to your specific text classification tasks.<\/p>\n<p>As a bonus, let\u2019s explore <strong>knowledge distillation<\/strong>, a powerful technique in machine learning, and see how it can be applied to text classification tasks.<\/p>\n<p><strong>What Is Knowledge Distillation?<\/strong><\/p>\n<p><strong>Knowledge distillation<\/strong> is like transferring wisdom from a teacher to a student. In machine learning:<\/p>\n<p><strong>Teacher Model:<\/strong> A large, complex model that has been trained on a vast amount of data and has learned intricate patterns.<\/p>\n<p><strong>Student Model:<\/strong> A smaller, simpler model that we want to train to perform almost as well as the\u00a0teacher.<\/p>\n<p><strong>Goal:<\/strong> To create a lightweight model (student) that mimics the performance of a heavyweight model (teacher) but is more efficient and faster, making it suitable for deployment on devices with limited resources like smartphones or embedded\u00a0systems.<\/p>\n<p>We\u2019ll use Python and the Hugging Face transformers library.<\/p>\n<p>pip install transformers datasets torch<\/p>\n<p>import torch<br \/>from transformers import AutoModelForSequenceClassification, AutoTokenizer<br \/>from datasets import load_dataset<\/p>\n<p>We\u2019ll use the IMDb movie reviews\u00a0dataset.<\/p>\n<p># Load the IMDb dataset<br \/>dataset = load_dataset('imdb')<\/p>\n<p># Use a subset for faster training (optional)<br \/>train_dataset = 
dataset['train'].shuffle(seed=42).select(range(2000))<br \/>test_dataset = dataset['test'].shuffle(seed=42).select(range(500))<\/p>\n<p>We\u2019ll use a large pre-trained model as the teacher, such as\u00a0BERT.<\/p>\n<p>teacher_model_name = 'bert-base-uncased'<br \/>teacher_model = AutoModelForSequenceClassification.from_pretrained(teacher_model_name, num_labels=2)<br \/>teacher_tokenizer = AutoTokenizer.from_pretrained(teacher_model_name)<\/p>\n<p>Fine-tune the teacher model on the training\u00a0data.<\/p>\n<p># Tokenize the training data (fixed-length padding so the default collate function can stack shuffled examples)<br \/>def tokenize(batch):<br \/>    return teacher_tokenizer(batch['text'], padding='max_length', truncation=True, max_length=256)<\/p>\n<p>train_dataset = train_dataset.map(tokenize, batched=True)<br \/>train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])<\/p>\n<p># DataLoader<br \/>from torch.utils.data import DataLoader<\/p>\n<p>train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)<\/p>\n<p># Optimizer and Scheduler (transformers&#8217; own AdamW is deprecated; use the PyTorch one)<br \/>from torch.optim import AdamW<br \/>from transformers import get_scheduler<\/p>\n<p>optimizer = AdamW(teacher_model.parameters(), lr=5e-5)<br \/>num_epochs = 2<br \/>num_training_steps = num_epochs * len(train_dataloader)<br \/>lr_scheduler = get_scheduler(<br \/>    name='linear', optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps<br \/>)<\/p>\n<p># Training Loop<br \/>device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')<br \/>teacher_model.to(device)<br \/>teacher_model.train()<\/p>\n<p>from tqdm.auto import tqdm<\/p>\n<p>progress_bar = tqdm(range(num_training_steps))<\/p>\n<p>for epoch in range(num_epochs):<br \/>    for batch in train_dataloader:<br \/>        batch = {k: v.to(device) for k, v in batch.items()}<br \/>        labels = batch.pop('label')  # the model expects the keyword argument `labels`<br \/>        outputs = teacher_model(**batch, labels=labels)<br \/>        loss = outputs.loss<\/p>\n<p>        
loss.backward()<br \/>        optimizer.step()<br \/>        lr_scheduler.step()<br \/>        optimizer.zero_grad()<br \/>        progress_bar.update(1)<\/p>\n<p>We\u2019ll use a smaller model for the student, such as DistilBERT.<\/p>\n<p>student_model_name = 'distilbert-base-uncased'<br \/>student_model = AutoModelForSequenceClassification.from_pretrained(student_model_name, num_labels=2)<br \/>student_tokenizer = AutoTokenizer.from_pretrained(student_model_name)<br \/>student_model.to(device)<br \/>student_model.train()<\/p>\n<p>Tokenize the data using the student tokenizer.<\/p>\n<p># Tokenize with student tokenizer (same fixed-length padding as the teacher)<br \/>def tokenize_student(batch):<br \/>    return student_tokenizer(batch['text'], padding='max_length', truncation=True, max_length=256)<\/p>\n<p># Clear the torch format first so map can see the raw 'text' column<br \/>train_dataset_student = train_dataset.with_format(None).map(tokenize_student, batched=True)<br \/>train_dataset_student.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])<br \/>train_dataloader_student = DataLoader(train_dataset_student, batch_size=8)<\/p>\n<p># A non-shuffled teacher loader so teacher and student batches line up example-for-example<br \/>train_dataloader_teacher = DataLoader(train_dataset, batch_size=8)<\/p>\n<p>Train the student model using the teacher\u2019s outputs.<\/p>\n<p># Loss Function<br \/>loss_fn = torch.nn.KLDivLoss(reduction='batchmean')<\/p>\n<p># Training Loop for Distillation<br \/>temperature = 2.0<br \/>optimizer_student = AdamW(student_model.parameters(), lr=5e-5)<br \/>num_training_steps_student = num_epochs * len(train_dataloader_student)<br \/>lr_scheduler_student = get_scheduler(<br \/>    name='linear', optimizer=optimizer_student, num_warmup_steps=0, num_training_steps=num_training_steps_student<br \/>)<\/p>\n<p>progress_bar_student = tqdm(range(num_training_steps_student))<\/p>\n<p>for epoch in range(num_epochs):<br \/>    for batch_teacher, batch_student in zip(train_dataloader_teacher, train_dataloader_student):<br \/>        # Move batches to device<br \/>        batch_teacher = {k: v.to(device) for k, v in batch_teacher.items()}<br \/>        batch_student = {k: 
v.to(device) for k, v in batch_student.items()}<br \/>        # Drop the labels: they aren't needed for the distillation loss,<br \/>        # and **batch unpacking would pass an unexpected `label` argument<br \/>        batch_teacher.pop('label', None)<br \/>        batch_student.pop('label', None)<\/p>\n<p>        # Get teacher&#8217;s predictions<br \/>        with torch.no_grad():<br \/>            teacher_outputs = teacher_model(**batch_teacher)<br \/>            teacher_logits = teacher_outputs.logits \/ temperature<br \/>            teacher_probs = torch.nn.functional.softmax(teacher_logits, dim=-1)<\/p>\n<p>        # Get student&#8217;s predictions<br \/>        student_outputs = student_model(**batch_student)<br \/>        student_logits = student_outputs.logits \/ temperature<br \/>        student_log_probs = torch.nn.functional.log_softmax(student_logits, dim=-1)<\/p>\n<p>        # Compute distillation loss (the T\u00b2 factor keeps gradient magnitudes comparable across temperatures)<br \/>        loss = loss_fn(student_log_probs, teacher_probs) * (temperature ** 2)<\/p>\n<p>        # Backpropagation<br \/>        loss.backward()<br \/>        optimizer_student.step()<br \/>        lr_scheduler_student.step()<br \/>        optimizer_student.zero_grad()<br \/>        progress_bar_student.update(1)<\/p>\n<p># Prepare test data<br \/>test_dataset = test_dataset.map(tokenize_student, batched=True)<br \/>test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])<br \/>test_dataloader = DataLoader(test_dataset, batch_size=8)<\/p>\n<p># Evaluation Loop<br \/>student_model.eval()<br \/>correct = 0<br \/>total = 0<\/p>\n<p>with torch.no_grad():<br \/>    for batch in test_dataloader:<br \/>        batch = {k: v.to(device) for k, v in batch.items()}<br \/>        labels = batch.pop('label')  # keep the labels aside; the model call doesn't take `label`<br \/>        outputs = student_model(**batch)<br \/>        predictions = torch.argmax(outputs.logits, dim=-1)<br \/>        correct += (predictions == labels).sum().item()<br \/>        total += labels.size(0)<\/p>\n<p>accuracy = correct \/ total<br \/>print(f'Student Model Accuracy: {accuracy * 100:.2f}%')<\/p>\n<p>You get results similar to the basic SetFit training above, but the prediction time 
is\u00a0reduced.<\/p>\n<p><strong>Conclusion:<\/strong><\/p>\n<p>We\u2019ve journeyed through the essential concepts of text classification, starting with how computers interpret text through vectorization. We explored how fine-tuning pre-trained models allows us to tailor these tools to our specific needs without starting from scratch. The introduction of the SetFit framework showcased a user-friendly and efficient way to fine-tune models with minimal data, making advanced text classification accessible to everyone\u200a\u2014\u200aeven those without extensive machine-learning expertise. By walking through practical code examples, we demonstrated how SetFit simplifies the process, enabling quick adaptation of models to accurately predict sentiments in\u00a0text.<\/p>\n<p>We also delved into the concept of knowledge distillation, illustrating how it helps create smaller, faster models that retain the performance of larger, more complex ones. This technique is invaluable for deploying models on devices with limited resources, ensuring efficiency without compromising accuracy. By combining SetFit\u2019s simplicity with the efficiency of knowledge distillation, we can harness powerful AI technologies to build practical, real-world applications. These tools not only make text classification more effective but also more accessible, paving the way for innovative solutions in various industries.<\/p>\n<p><a href=\"https:\/\/medium.com\/coinmonks\/text-classification-made-easy-with-setfit-053711bbf529\">Text Classification Made Easy with SetFit\u2026<\/a> was originally published in <a href=\"https:\/\/medium.com\/coinmonks\">Coinmonks<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>","protected":false},"excerpt":{"rendered":"<p>What exactly is text classification? 
Text classification is like teaching a computer to sort written text into different categories\u200a\u2014\u200aimagine organizing emails into \u201cspam\u201d or \u201cinbox.\u201d The Hugging Face SetFit framework is a tool that makes this teaching process simpler and more efficient. It allows us to train computers to understand and classify text using only [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-16209","post","type-post","status-publish","format-standard","hentry","category-interesting"],"_links":{"self":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/16209"}],"collection":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16209"}],"version-history":[{"count":0,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/16209\/revisions"}],"wp:attachment":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16209"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16209"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16209"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}