- channel.basic_publish(exchange='',
- routing_key="task_queue",
- body=message,
- properties=pika.BasicProperties(
- delivery_mode = 2, # make message persistent
- ))
If RabbitMQ simply dispatched messages to consumers in order, without considering each consumer's load, it could easily happen that a consumer on a low-spec machine piles up a backlog of messages it cannot finish, while a consumer on a high-spec machine stays idle. To solve this, configure prefetch_count=1 on each consumer, which tells RabbitMQ not to send this consumer a new message until it has finished processing the current one.
- channel.basic_qos(prefetch_count=1)
Complete code with message persistence + fair dispatch
Producer side
- #!/usr/bin/env python
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.queue_declare(queue='task_queue', durable=True)
- message = ' '.join(sys.argv[1:]) or "Hello World!"
- channel.basic_publish(exchange='',
- routing_key='task_queue',
- body=message,
- properties=pika.BasicProperties(
- delivery_mode = 2, # make message persistent
- ))
- print(" [x] Sent %r" % message)
- connection.close()
Consumer side
- #!/usr/bin/env python
- import pika
- import time
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.queue_declare(queue='task_queue', durable=True)
- print(' [*] Waiting for messages. To exit press CTRL+C')
- def callback(ch, method, properties, body):
- print(" [x] Received %r" % body)
- time.sleep(body.count(b'.'))
- print(" [x] Done")
- ch.basic_ack(delivery_tag = method.delivery_tag)
- channel.basic_qos(prefetch_count=1)
- channel.basic_consume(callback,
- queue='task_queue')
- channel.start_consuming()
The examples so far have basically all been 1-to-1 message sending and receiving, i.e. a message can only be sent to one specified queue. Sometimes, however, you want a message to be received by all the queues, like a broadcast; this is where exchanges come in.
An exchange is a very simple thing. On one side it receives messages from producers and the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded? The rules for that are defined by the exchange type.
An exchange is declared with a type, which determines which queues qualify to receive a message:
fanout: every queue bound to this exchange receives the message
direct: only the single queue determined by the exchange and the routing key receives the message
topic: every queue whose binding key (which may be a pattern) matches the message's routing key receives the message
Pattern syntax: # matches zero or more words, * matches exactly one word (words are separated by dots)
Example: #.a matches a.a, aa.a, aaa.a, etc.
*.a matches a.a, b.a, c.a, etc.
Note: binding with routing key # on a topic exchange is equivalent to using fanout
headers: the message's headers decide which queues receive it (a short sketch follows this list)
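The headers type has no dedicated example below, so here is a minimal sketch of how it could be used with pika; the exchange name header_logs and the header keys used for matching are illustrative assumptions, not part of the original examples.
- import pika
- connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='header_logs', type='headers')  # newer pika versions use exchange_type= instead of type=
- # consumer side: bind with x-match=all, so every listed header must match
- result = channel.queue_declare(exclusive=True)
- queue_name = result.method.queue
- channel.queue_bind(exchange='header_logs', queue=queue_name,
-                    arguments={'x-match': 'all', 'severity': 'error', 'source': 'cron'})
- # producer side: the routing_key is ignored, matching is done on the headers property
- channel.basic_publish(exchange='header_logs', routing_key='',
-                       body='disk full',
-                       properties=pika.BasicProperties(headers={'severity': 'error', 'source': 'cron'}))
With 'x-match': 'any' instead of 'all', a single matching header is enough for the queue to receive the message.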
Message publisher
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='logs',
- type='fanout')
- message = ' '.join(sys.argv[1:]) or "info: Hello World!"
- channel.basic_publish(exchange='logs',
- routing_key='',
- body=message)
- print(" [x] Sent %r" % message)
- connection.close()
Message subscriber
- #_*_coding:utf-8_*_
- __author__ = 'Alex Li'
- import pika
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='logs',
- type='fanout')
- result = channel.queue_declare(exclusive=True) # no queue name is given, so RabbitMQ assigns a random one; exclusive=True deletes the queue automatically once the consumer using it disconnects
- queue_name = result.method.queue
- channel.queue_bind(exchange='logs',
- queue=queue_name)
- print(' [*] Waiting for logs. To exit press CTRL+C')
- def callback(ch, method, properties, body):
- print(" [x] %r" % body)
- channel.basic_consume(callback,
- queue=queue_name,
- no_ack=True)
- channel.start_consuming()
RabbitMQ also supports routing by keyword: queues are bound to the exchange with a keyword, the producer sends data to the exchange with a keyword (the routing key), and the exchange uses that keyword to decide which queue the data should be delivered to.
publisher
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='direct_logs',
- type='direct')
- severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
- message = ' '.join(sys.argv[2:]) or 'Hello World!'
- channel.basic_publish(exchange='direct_logs',
- routing_key=severity,
- body=message)
- print(" [x] Sent %r:%r" % (severity, message))
- connection.close()
subscriber
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='direct_logs',
- type='direct')
- result = channel.queue_declare(exclusive=True)
- queue_name = result.method.queue
- severities = sys.argv[1:]
- if not severities:
- sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
- sys.exit(1)
- for severity in severities:
- channel.queue_bind(exchange='direct_logs',
- queue=queue_name,
- routing_key=severity)
- print(' [*] Waiting for logs. To exit press CTRL+C')
- def callback(ch, method, properties, body):
- print(" [x] %r:%r" % (method.routing_key, body))
- channel.basic_consume(callback,
- queue=queue_name,
- no_ack=True)
- channel.start_consuming()
Although using the direct exchange improved our system, it still has limitations - it can't do routing based on multiple criteria.
In our logging system we might want to subscribe to not only logs based on severity, but also based on the source which emitted the log. You might know this concept from the syslog unix tool, which routes logs based on both severity (info/warn/crit...) and facility (auth/cron/kern...).
That would give us a lot of flexibility - we may want to listen to just critical errors coming from 'cron' but also all logs from 'kern'.
publisher
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='topic_logs',
- type='topic')
- routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
- message = ' '.join(sys.argv[2:]) or 'Hello World!'
- channel.basic_publish(exchange='topic_logs',
- routing_key=routing_key,
- body=message)
- print(" [x] Sent %r:%r" % (routing_key, message))
- connection.close()
subscriber
- import pika
- import sys
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.exchange_declare(exchange='topic_logs',
- type='topic')
- result = channel.queue_declare(exclusive=True)
- queue_name = result.method.queue
- binding_keys = sys.argv[1:]
- if not binding_keys:
- sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
- sys.exit(1)
- for binding_key in binding_keys:
- channel.queue_bind(exchange='topic_logs',
- queue=queue_name,
- routing_key=binding_key)
- print(' [*] Waiting for logs. To exit press CTRL+C')
- def callback(ch, method, properties, body):
- print(" [x] %r:%r" % (method.routing_key, body))
- channel.basic_consume(callback,
- queue=queue_name,
- no_ack=True)
- channel.start_consuming()
To receive all the logs run:
- python receive_logs_topic.py "#"
To receive all logs from the facility "kern":
- python receive_logs_topic.py "kern.*"
Or if you want to hear only about "critical" logs:
- python receive_logs_topic.py "*.critical"
You can create multiple bindings:
- python receive_logs_topic.py "kern.*" "*.critical"
And to emit a log with a routing key "kern.critical" type:
- python emit_log_topic.py "kern.critical" "A critical kernel error"
To illustrate how an RPC service could be used we're going to create a simple client class. It's going to expose a method named call which sends an RPC request and blocks until the answer is received:
- fibonacci_rpc = FibonacciRpcClient()
- result = fibonacci_rpc.call(4)
- print("fib(4) is %r" % result)
RPC server
- #_*_coding:utf-8_*_
- __author__ = 'Alex Li'
- import pika
- import time
- connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- channel = connection.channel()
- channel.queue_declare(queue='rpc_queue')
- def fib(n):
- if n == 0:
- return 0
- elif n == 1:
- return 1
- else:
- return fib(n-1) + fib(n-2)
- def on_request(ch, method, props, body):
- n = int(body)
- print(" [.] fib(%s)" % n)
- response = fib(n)
- ch.basic_publish(exchange='',
- routing_key=props.reply_to,
- properties=pika.BasicProperties(correlation_id = props.correlation_id),
- body=str(response))
- ch.basic_ack(delivery_tag = method.delivery_tag)
- channel.basic_qos(prefetch_count=1)
- channel.basic_consume(on_request, queue='rpc_queue')
- print(" [x] Awaiting RPC requests")
- channel.start_consuming()
RPC client
- import pika
- import uuid
- class FibonacciRpcClient(object):
- def __init__(self):
- self.connection = pika.BlockingConnection(pika.ConnectionParameters(
- host='localhost'))
- self.channel = self.connection.channel()
- result = self.channel.queue_declare(exclusive=True)
- self.callback_queue = result.method.queue
- self.channel.basic_consume(self.on_response, no_ack=True,
- queue=self.callback_queue)
- def on_response(self, ch, method, props, body):
- if self.corr_id == props.correlation_id:
- self.response = body
- def call(self, n):
- self.response = None
- self.corr_id = str(uuid.uuid4())
- self.channel.basic_publish(exchange='',
- routing_key='rpc_queue',
- properties=pika.BasicProperties(
- reply_to = self.callback_queue,
- correlation_id = self.corr_id,
- ),
- body=str(n))
- while self.response is None:
- self.connection.process_data_events()
- return int(self.response)
- fibonacci_rpc = FibonacciRpcClient()
- print(" [x] Requesting fib(30)")
- response = fibonacci_rpc.call(30)
- print(" [.] Got %r" % response)
http://www.cnblogs.com/wupeiqi/articles/5132791.html
Twisted is an event-driven networking framework that bundles many features: network protocols, threading, database management, network operations, e-mail, and more.
Event-driven programming
In short, event-driven programming has two parts: first, register events; second, trigger events.
A custom event-driven framework, named "弑君者" (the Kingslayer):
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- # event_drive.py
- event_list = []
- def run():
- for event in event_list:
- obj = event()
- obj.execute()
- class BaseHandler(object):
- """
- Subclasses must inherit from this class; it standardizes the methods of every handler class (similar to an interface)
- """
- def execute(self):
- raise Exception('you must overwrite execute')
A programmer using the "弑君者" framework:
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- from source import event_drive
- class MyHandler(event_drive.BaseHandler):
- def execute(self):
- print 'event-drive execute MyHandler'
- event_drive.event_list.append(MyHandler)
- event_drive.run()
Protocols describe how to handle network events asynchronously. HTTP, DNS and IMAP are examples of application-layer protocols. Protocols implement the IProtocol interface, which contains the following methods:
- makeConnection: set up a connection between a transport object and the server
- connectionMade: called when a connection has been established
- dataReceived: called when data is received
- connectionLost: called when the connection is closed
Transports represent the connection between two communicating endpoints on a network. Transports describe the details of that connection, such as whether it is stream-oriented or datagram-oriented, flow control, and reliability. TCP, UDP and Unix sockets are examples of transports. They are designed to be "a minimal unit of functionality with maximal reusability" and are separated from the protocol implementation, which lets many protocols use the same kind of transport. Transports implement the ITransport interface, which contains the following methods:
- write: write data to the physical connection, in order, in a non-blocking fashion
- writeSequence: write a list of strings to the physical connection
- loseConnection: write out all pending data and then close the connection
- getPeer: get the address of the remote end of the connection
- getHost: get the address of the local end of the connection
Separating transports from protocols also makes testing both layers much simpler. A transport can be simulated simply by writing strings to it, and a protocol's behaviour checked that way (see the sketch below).
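As an illustration (a sketch that is not part of the original text), the Echo protocol from the EchoServer example below could be tested against a fake transport using twisted.test.proto_helpers.StringTransport, without opening a real socket:
- from twisted.test import proto_helpers
- # assumes the Echo protocol class from the EchoServer example below
- proto = Echo()
- transport = proto_helpers.StringTransport()  # fake transport that records everything written to it
- proto.makeConnection(transport)              # attach the protocol to the fake transport
- proto.dataReceived("hello alex!")            # feed in data as if it arrived off the wire
- print transport.value()                      # prints "hello alex!", i.e. whatever Echo wrote back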
EchoServer
- from twisted.internet import protocol
- from twisted.internet import reactor
- class Echo(protocol.Protocol):
- def dataReceived(self, data):
- self.transport.write(data)
- def main():
- factory = protocol.ServerFactory()
- factory.protocol = Echo
- reactor.listenTCP(1234,factory)
- reactor.run()
- if __name__ == '__main__':
- main()
EchoClient
- from twisted.internet import reactor, protocol
- # a client protocol
- class EchoClient(protocol.Protocol):
- """Once connected, send a message, then print the result."""
- def connectionMade(self):
- self.transport.write("hello alex!")
- def dataReceived(self, data):
- "As soon as any data is received, write it back."
- print "Server said:", data
- self.transport.loseConnection()
- def connectionLost(self, reason):
- print "connection lost"
- class EchoFactory(protocol.ClientFactory):
- protocol = EchoClient
- def clientConnectionFailed(self, connector, reason):
- print "Connection failed - goodbye!"
- reactor.stop()
- def clientConnectionLost(self, connector, reason):
- print "Connection lost - goodbye!"
- reactor.stop()
- # this connects the protocol to a server running on port 8000
- def main():
- f = EchoFactory()
- reactor.connectTCP("localhost", 1234, f)
- reactor.run()
- # this only runs if the module was *not* imported
- if __name__ == '__main__':
- main()
Running the server-side script starts a TCP server that listens for connections on port 1234. The server uses the Echo protocol, and data is written out through the TCP transport object. Running the client-side script opens a TCP connection to the server, echoes the server's response, then closes the connection and stops the reactor event loop. The Factory here is used to produce protocol instances for each side of the connection. The communication on both sides is asynchronous: connectTCP registers callbacks with the reactor event loop, which invokes them when there is data to read on the socket.
server side
- #_*_coding:utf-8_*_
- # This is the Twisted Fast Poetry Server, version 1.0
- import optparse, os
- from twisted.internet.protocol import ServerFactory, Protocol
- def parse_args():
- usage = """usage: %prog [options] poetry-file
- This is the Fast Poetry Server, Twisted edition.
- Run it like this:
- python fastpoetry.py <path-to-poetry-file>
- If you are in the base directory of the twisted-intro package,
- you could run it like this:
- python twisted-server-1/fastpoetry.py poetry/ecstasy.txt
- to serve up John Donne's Ecstasy, which I know you want to do.
- """
- parser = optparse.OptionParser(usage)
- help = "The port to listen on. Default to a random available port."
- parser.add_option('--port', type='int', help=help)
- help = "The interface to listen on. Default is localhost."
- parser.add_option('--iface', help=help, default='localhost')
- options, args = parser.parse_args()
- print("--arg:",options,args)
- if len(args) != 1:
- parser.error('Provide exactly one poetry file.')
- poetry_file = args[0]
- if not os.path.exists(args[0]):
- parser.error('No such file: %s' % poetry_file)
- return options, poetry_file
- class PoetryProtocol(Protocol):
- def connectionMade(self):
- self.transport.write(self.factory.poem)
- self.transport.loseConnection()
- class PoetryFactory(ServerFactory):
- protocol = PoetryProtocol
- def __init__(self, poem):
- self.poem = poem
- def main():
- options, poetry_file = parse_args()
- poem = open(poetry_file).read()
- factory = PoetryFactory(poem)
- from twisted.internet import reactor
- port = reactor.listenTCP(options.port or 9000, factory,
- interface=options.iface)
- print 'Serving %s on %s.' % (poetry_file, port.getHost())
- reactor.run()
- if __name__ == '__main__':
- main()
client side
- # This is the Twisted Get Poetry Now! client, version 3.0.
- # NOTE: This should not be used as the basis for production code.
- import optparse
- from twisted.internet.protocol import Protocol, ClientFactory
- def parse_args():
- usage = """usage: %prog [options] [hostname]:port ...
- This is the Get Poetry Now! client, Twisted version 3.0
- Run it like this:
- python get-poetry-1.py port1 port2 port3 ...
- """
- parser = optparse.OptionParser(usage)
- _, addresses = parser.parse_args()
- if not addresses:
- print parser.format_help()
- parser.exit()
- def parse_address(addr):
- if ':' not in addr:
- host = '127.0.0.1'
- port = addr
- else:
- host, port = addr.split(':', 1)
- if not port.isdigit():
- parser.error('Ports must be integers.')
- return host, int(port)
- return map(parse_address, addresses)
- class PoetryProtocol(Protocol):
- poem = ''
- def dataReceived(self, data):
- self.poem += data
- def connectionLost(self, reason):
- self.poemReceived(self.poem)
- def poemReceived(self, poem):
- self.factory.poem_finished(poem)
- class PoetryClientFactory(ClientFactory):
- protocol = PoetryProtocol
- def __init__(self, callback):
- self.callback = callback
- def poem_finished(self, poem):
- self.callback(poem)
- def get_poetry(host, port, callback):
- """
- Download a poem from the given host and port and invoke
- callback(poem)
- when the poem is complete.
- """
- from twisted.internet import reactor
- factory = PoetryClientFactory(callback)
- reactor.connectTCP(host, port, factory)
- def poetry_main():
- addresses = parse_args()
- from twisted.internet import reactor
- poems = []
- def got_poem(poem):
- poems.append(poem)
- if len(poems) == len(addresses):
- reactor.stop()
- for address in addresses:
- host, port = address
- get_poetry(host, port, got_poem)
- reactor.run()
- for poem in poems:
- print poem
- if __name__ == '__main__':
- poetry_main()
http://krondo.com/an-introduction-to-asynchronous-programming-and-twisted/
http://blog.csdn.net/hanhuili/article/details/9389433
SQLAlchemy is an ORM framework. It is built on top of the database API and performs database operations through relational-object mapping. In short: it turns objects into SQL, then uses the database API to execute the SQL and fetch the results.
The Dialect is what talks to the database API; depending on the configuration it calls a different database API to operate on the database, for example:
- MySQL-Python
- mysql+mysqldb://<user>:<password>@<host>[:<port>]/<dbname>
- pymysql
- mysql+pymysql://<username>:<password>@<host>/<dbname>[?<options>]
- MySQL-Connector
- mysql+mysqlconnector://<user>:<password>@<host>[:<port>]/<dbname>
- cx_Oracle
- oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...]
- More details: http://docs.sqlalchemy.org/en/latest/dialects/index.html
Step 1:
Operate on the database with Engine / ConnectionPooling / Dialect: the Engine uses ConnectionPooling to connect to the database and then executes the SQL statements through the Dialect.
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- from sqlalchemy import create_engine
- engine = create_engine("mysql+mysqldb://root::3306/s11", max_overflow=5)
- engine.execute(
- "INSERT INTO ts_test (a, b) VALUES ('2', 'v1')"
- )
- engine.execute(
- "INSERT INTO ts_test (a, b) VALUES (%s, %s)",
- ((555, "v1"),(666, "v1"),)
- )
- engine.execute(
- "INSERT INTO ts_test (a, b) VALUES (%(id)s, %(name)s)",
- id=999, name="v1"
- )
- result = engine.execute('select * from ts_test')
- result.fetchall()
Step 2:
Operate on the database with Schema Type / SQL Expression Language / Engine / ConnectionPooling / Dialect: the Engine uses Schema Types to build a structured object, the SQL Expression Language turns that object into a SQL statement, ConnectionPooling connects to the database, and the Dialect executes the SQL and fetches the results.
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
- metadata = MetaData()
- user = Table('user', metadata,
- Column('id', Integer, primary_key=True),
- Column('name', String(20)),
- )
- color = Table('color', metadata,
- Column('id', Integer, primary_key=True),
- Column('name', String(20)),
- )
- engine = create_engine("mysql+mysqldb::3306/test", max_overflow=5)
- metadata.create_all(engine)
Create / delete / update / query (CRUD)
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
- metadata = MetaData()
- user = Table('user', metadata,
- Column('id', Integer, primary_key=True),
- Column('name', String(20)),
- )
- color = Table('color', metadata,
- Column('id', Integer, primary_key=True),
- Column('name', String(20)),
- )
- engine = create_engine("mysql+mysqldb://<user>:<password>@<host>:3306/s11", max_overflow=5)  # fill in the real user, password and host
- conn = engine.connect()
- # builds the SQL statement: INSERT INTO "user" (id, name) VALUES (:id, :name)
- conn.execute(user.insert(),{'id':7,'name':'seven'})
- conn.close()
- # sql = user.insert().values(id=123, name='wu')
- # conn.execute(sql)
- # conn.close()
- # sql = user.delete().where(user.c.id > 1)
- # sql = user.update().values(fullname=user.c.name)
- # sql = user.update().where(user.c.name == 'jack').values(name='ed')
- # sql = select([user, ])
- # sql = select([user.c.id, ])
- # sql = select([user.c.name, color.c.name]).where(user.c.id==color.c.id)
- # sql = select([user.c.name]).order_by(user.c.name)
- # sql = select([user]).group_by(user.c.name)
- # result = conn.execute(sql)
- # print result.fetchall()
- # conn.close()
A simple complete example
- from sqlalchemy import create_engine
- from sqlalchemy.ext.declarative import declarative_base
- from sqlalchemy import Column, Integer, String
- from sqlalchemy.orm import sessionmaker
- Base = declarative_base()  # create the SQLAlchemy ORM base class
- engine = create_engine("mysql+mysqldb://<user>:<password>@<host>:3306/test", echo=False)  # fill in the real user, password and host
- class Host(Base):
- __tablename__ = 'hosts'
- id = Column(Integer,primary_key=True,autoincrement=True)
- hostname = Column(String(64),unique=True,nullable=False)
- ip_addr = Column(String(128),unique=True,nullable=False)
- port = Column(Integer,default=22)
- Base.metadata.create_all(engine)  # create all the table structures
- if __name__ == '__main__':
- SessionCls = sessionmaker(bind=engine)  # create the session class bound to the database; note that sessionmaker returns a class, not an instance
- session = SessionCls()
- #h1 = Host(hostname='localhost',ip_addr='127.0.0.1')
- #h2 = Host(hostname='ubuntu',ip_addr='192.168.2.243',port=20000)
- #h3 = Host(hostname='ubuntu2',ip_addr='192.168.2.244',port=20000)
- #session.add(h3)
- #session.add_all( [h1,h2])
- #h2.hostname = 'ubuntu_test'  # as long as nothing has been committed, it is fine to modify the object here
- #session.rollback()
- #session.commit()  # commit
- res = session.query(Host).filter(Host.hostname.in_(['ubuntu2','localhost'])).all()
- print(res)
More details:
http://www.jianshu.com/p/e6bba189fcbd
http://docs.sqlalchemy.org/en/latest/core/expression_api.html
Note: SQLAlchemy itself cannot alter existing table structures; if you need that, use Alembic, another open-source tool from the SQLAlchemy developers (a minimal sketch of its workflow follows).
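A minimal sketch of the typical Alembic workflow (not from the original text; the directory name and the revision message are placeholders):
- pip install alembic
- alembic init alembic                               # creates alembic.ini and the alembic/ directory
- # edit alembic.ini: set sqlalchemy.url to your database URL
- # edit alembic/env.py: point target_metadata at your models' Base.metadata
- alembic revision --autogenerate -m "add a column"  # generate a migration script from model changes
- alembic upgrade head                               # apply the pending migrations to the database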
Step 3:
Operate on data with all of the components: ORM / Schema Type / SQL Expression Language / Engine / ConnectionPooling / Dialect. Objects are created from classes, the objects are converted into SQL, and the SQL is executed.
- #!/usr/bin/env python
- # -*- coding:utf-8 -*-
- from sqlalchemy.ext.declarative import declarative_base
- from sqlalchemy import Column, Integer, String
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy import create_engine
- engine = create_engine("mysql+mysqldb://<user>:<password>@<host>:3306/s11", max_overflow=5)  # fill in the real user, password and host
- Base = declarative_base()
- class User(Base):
- __tablename__ = 'users'
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
- # find all subclasses of Base and create the corresponding tables in the database based on their structure
- # Base.metadata.create_all(engine)
- Session = sessionmaker(bind=engine)
- session = Session()
- # ########## create ##########
- # u = User(id=2, name='sb')
- # session.add(u)
- # session.add_all([
- # User(id=3, name='sb'),
- # User(id=4, name='sb')
- # ])
- # session.commit()
- # ########## delete ##########
- # session.query(User).filter(User.id > 2).delete()
- # session.commit()
- # ########## update ##########
- # session.query(User).filter(User.id > 2).update({'name' : 'new_name'})
- # session.commit()
- # ########## query ##########
- # ret = session.query(User).filter_by(name='sb').first()
- # ret = session.query(User).filter_by(name='sb').all()
- # print ret
- # ret = session.query(User).filter(User.name.in_(['sb','bb'])).all()
- # print ret
- # ret = session.query(User.name.label('name_label')).all()
- # print ret,type(ret)
- # ret = session.query(User).order_by(User.id).all()
- # print ret
- # ret = session.query(User).order_by(User.id)[1:3]
- # print ret
- # session.commit()
A one to many relationship places a foreign key on the child table referencing the parent. relationship() is then specified on the parent, as referencing a collection of items represented by the child:
- from sqlalchemy import Table, Column, Integer, ForeignKey
- from sqlalchemy.orm import relationship
- from sqlalchemy.ext.declarative import declarative_base
- Base = declarative_base()
- class Parent(Base):
- __tablename__ = 'parent'
- id = Column(Integer, primary_key=True)
- children = relationship("Child")
- class Child(Base):
- __tablename__ = 'child'
- id = Column(Integer, primary_key=True)
- parent_id = Column(Integer, ForeignKey('parent.id'))
To establish a bidirectional relationship in one-to-many, where the "reverse" side is a many to one, specify an additional relationship() and connect the two using the back_populates parameter:
- class Parent(Base):
- __tablename__ = 'parent'
- id = Column(Integer, primary_key=True)
- children = relationship("Child", back_populates="parent")
- class Child(Base):
- __tablename__ = 'child'
- id = Column(Integer, primary_key=True)
- parent_id = Column(Integer, ForeignKey('parent.id'))
- parent = relationship("Parent", back_populates="children")
Child will get a parent attribute with many-to-one semantics.
Alternatively, the backref option may be used on a single relationship() instead of using back_populates (a usage sketch follows the code below):
- class Parent(Base):
- __tablename__ = 'parent'
- id = Column(Integer, primary_key=True)
- children = relationship("Child", backref="parent")
Appendix: raw SQL join queries
The differences between the various joins: http://stackoverflow.com/questions/38549/difference-between-inner-and-outer-joins
- select host.id,
- hostname,
- ip_addr,
- port,
- host_group.name from host right join host_group on host.id = host_group.host_id
In SQLAlchemy:
- session.query(Host).join(Host.host_groups).filter(HostGroup.name=='t1').group_by("Host").all()
A group by query
- select name,count(host.id) as NumberOfHosts from host right join host_group on host.id= host_group.host_id group by name;
In SQLAlchemy:
- from sqlalchemy import func
- session.query(HostGroup, func.count(HostGroup.name )).group_by(HostGroup.name).all()
- # another example
- session.query(func.count(User.name), User.name).group_by(User.name).all()
- # generated SQL:
- # SELECT count(users.name) AS count_1, users.name AS users_name
- # FROM users GROUP BY users.name
Exercise: an FTP server based on I/O multiplexing
Requirements:
Exercise: an RPC command client
Requirements:
>>:run "df -h" --hosts 192.168.3.55 10.4.3.4
task id: 45334
>>: check_task 45334
>>:
Source: