Python: How to add a condition in Django's views.py so that a duplicate data object is not created when the same data already exists in the database?

Tags: python, django, sqlite, validation


Here is the views.py file:


I have tried different approaches, such as adding `unique=True` in the models.py file, but it still doesn't work and throws more errors. I can't work out the exact condition to add to the views.py file so that it checks whether the data already exists in the database: if it doesn't exist, it should add the scraped data; otherwise nothing should happen.

I think you can use `get_or_create`, e.g. `News.objects.get_or_create(title=XYZ)`, and make the `title` field unique.

from django.shortcuts import render
from .models import News
from django.core.paginator import Paginator
from django.db.models import Q
# For scraping part
import requests
from bs4 import BeautifulSoup


def news_list(request, *args, **kwargs):
    # For scraping part - START::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    response = requests.get("http://www.iitg.ac.in/home/eventsall/events")
    soup = BeautifulSoup(response.content, "html.parser")
    cards = soup.find_all("div", attrs={"class": "newsarea"})

    iitg_title = []
    iitg_date = []
    iitg_link = []
    for card in cards[0:6]:
        iitg_date.append(card.find("div", attrs={"class": "ndate"}).text)
        iitg_title.append(card.find("div", attrs={"class": "ntitle"}).text.strip())
        iitg_link.append(card.find("div", attrs={"class": "ntitle"}).a['href'])
    # For scraping part - END::::::::::::::::::::::::::::::::::::::::::::::::::::::::

    # For storing the scraped data directly into the database from the views.py file - START---------------------------------------------------------------
    for i in range(len(iitg_title)):
        News.objects.create(title=iitg_title[i], datess=iitg_date[i], linkss=iitg_link[i])
    # For storing the scraped data directly into the database from the views.py file - END-----------------------------------------------------------------

    queryset = News.objects.all()   #Getting all the objects from the database

    search_query = request.GET.get('q')
    if search_query:
        queryset = queryset.filter(
            Q(title__icontains = search_query) |
            Q(description__icontains = search_query)
        )

    paginator = Paginator(queryset, 5)  #Adding pagination
    page_number = request.GET.get('page')
    queryset = paginator.get_page(page_number)

    context = {
       'object_list': queryset
    }

    return render(request, 'news_list.html', context)