Example Alarm fail
irvinvp — last edited by bucknall
Hello,
I'm using 1.8.6 on a SiPy and the example gives:
Traceback (most recent call last):
File "<stdin>", line 12, in _seconds_handler
AttributeError: 'Alarm' object has no attribute 'cancel'
What's the problem?
Example:
from machine import Timer

class Clock:
    def __init__(self):
        self.seconds = 0
        self.__alarm = Timer.Alarm(self._seconds_handler, 1, periodic=True)

    def _seconds_handler(self, alarm):
        self.seconds += 1
        print("%02d seconds have passed" % self.seconds)
        if self.seconds == 10:
            alarm.cancel() # stop counting after 10 seconds

clock = Clock()
robert-hh — last edited by
@VST-Admin The actual example in the examples section of the documentation mentions alarm.callback(None).
The recent firmware supports the cancel method.
So I recommend updating the firmware.
@irvinvp Why, after 3 years has the documentation NEVER been fixed?
irvinvp — last edited by
The problem is that "alarm.cancel()" is not functional; use "alarm.callback(None)" instead.
|
Generators in Python
In this article, Quantrimang will walk you through creating iterators using generators in Python: how generators differ from iterators and ordinary functions, and why you should use them. Let's find out through the following sections.
Generators in Python
To build an iterator by hand we have to keep track of quite a few things, for example: implement a class with the __iter__() and __next__() methods, keep track of internal state, and raise StopIteration when there are no more values to return...
Generators are meant to solve these problems. A generator is simply an easy way to create an iterator.
Put simply, a generator is a function that returns an object (an iterator) that we can iterate over (one value at a time). It also produces a list-like sequence of values, but you can only traverse the elements of a generator once, because a generator does not store its data in memory; on each iteration it produces the next element of the sequence and returns it.
How to create a generator in Python
To create a generator in Python, you use the def keyword just as when defining a function. Inside the generator, you use the yield statement to return elements instead of the usual return statement.
If a function contains at least one yield (it may contain several yields and even a return), then it is definitely a generator function. In that case, both yield and return will return values from the function.
The difference is that return terminates the function completely, whereas yield only pauses the function, saving its internal state, so that it can resume when called again later.
For example, the first time you call __next__(), the generator computes values until it reaches a yield and returns the element at that point; the second time you call __next__(), the generator does not start from the beginning but resumes right after the first yield. The generator keeps producing elements of the sequence this way until it no longer reaches any yield, at which point it raises the StopIteration exception.
Differences between a generator function and a normal function
Here are some differences between a generator function and a normal function:
A generator function contains one or more yield statements.
When called, a generator returns an object (an iterator) but does not start executing immediately.
Methods such as __iter__() and __next__() are implemented automatically, so we can iterate over the items using next().
yield pauses the function; its local variables and their state are remembered between successive calls. Each time the yield statement runs, it produces a new value.
Finally, when the function terminates, StopIteration is raised if it is called again.
Below is an example that illustrates all of the points above. We have a generator function named my_gen() with several yield statements.
# A simple generator function
# Written by Quantrimang.com
def my_gen():
    n = 1
    print('This text is printed first')
    # A generator function contains yield statements
    yield n

    n += 1
    print('This text is printed second')
    yield n

    n += 1
    print('This text is printed last')
    yield n
Run it in the Python shell to see the output:
>>> # Returns an object but does not start executing immediately.
>>> a = my_gen()
>>> # We can iterate over the items using next().
>>> next(a)
This text is printed first
1
>>> # yield pauses the function and control is transferred to the caller.
>>> # Local variables and their state are remembered between successive calls.
>>> next(a)
This text is printed second
2
>>> next(a)
This text is printed last
3
>>> # Finally, when the function terminates, StopIteration is raised if it is called again.
>>> next(a)
Traceback (most recent call last):
...
StopIteration
>>> next(a)
Traceback (most recent call last):
...
StopIteration
As you can see in the example above, the value of the variable n is remembered between calls, unlike normal functions, whose state is discarded as soon as each call ends.
When you call next() the first time, the generator computes the value and returns the element at that point; when you call next() the second time, it does not start from the beginning but resumes right after the first yield. The generator keeps producing elements of the sequence this way until it no longer reaches any yield, at which point it raises the StopIteration exception.
To restart the process, create another generator object, for example a = my_gen().
Note: generators can be used directly in for loops.
A for loop takes an iterator and iterates over it using next(), terminating automatically when StopIteration is raised.
# A simple generator function
def my_gen():
    n = 1
    print('This text is printed first')
    # A generator function contains yield statements
    yield n

    n += 1
    print('This text is printed second')
    yield n

    n += 1
    print('This text is printed last')
    yield n

# Using a for loop
for item in my_gen():
    print(item)
Running the program, the result is:
This text is printed first
1
This text is printed second
2
This text is printed last
3
Generators with loops in Python
An example of a generator that reverses a string.
def rev_str(my_str):
    length = len(my_str)
    for i in range(length - 1, -1, -1):
        yield my_str[i]

# A for loop that reverses the string
# Written by Quantrimang.com
# Output:
# o
# l
# l
# e
# h
for char in rev_str("hello"):
    print(char)
This example uses the range() function to get the indices in reverse order inside the for loop.
Generator expressions
Generators can also be created easily using generator expressions.
Just as a lambda creates an anonymous function in Python, a generator expression creates an anonymous generator. Its syntax is similar to that of a list comprehension, but the square brackets are replaced with parentheses.
A list comprehension returns a whole list, whereas a generator expression produces one item at a time, when it is asked for. For this reason, a generator expression uses much less memory and is more efficient than the equivalent list comprehension.
# Initialize a list
my_list = [1, 3, 6, 10]

# square each element using a list comprehension
# Output: [1, 9, 36, 100]
[x**2 for x in my_list]

# the same result using a generator expression
# Output: <generator object <genexpr> at 0x0000000002EBDAF8>
(x**2 for x in my_list)
As the example above shows, the generator expression does not produce the result right away; it returns a generator object, which produces the next element in the sequence each time it is iterated.
# Initialize a list
my_list = [1, 3, 6, 10]
a = (x**2 for x in my_list)

# Output: 1
print(next(a))

# Output: 9
print(next(a))

# Output: 36
print(next(a))

# Output: 100
print(next(a))

# Output: StopIteration
next(a)
When a generator expression is used inside a function call, the parentheses can be omitted.
>>> sum(x**2 for x in my_list)
146
>>> max(x**2 for x in my_list)
100
Why use generators in Python?
Using generators brings several attractive benefits.
1. Simpler code, easier to implement
Generators can be implemented in a clearer and more concise way than the equivalent iterator class. To illustrate this, let's look at a concrete example.
class PowTwo:
    def __init__(self, max=0):
        self.max = max

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n > self.max:
            raise StopIteration
        result = 2 ** self.n
        self.n += 1
        return result
That is quite a lot of code. Now let's try the same thing with a generator function.
def PowTwoGen(max=0):
    n = 0
    while n < max:
        yield 2 ** n
        n += 1
Here the generator version is much shorter and cleaner.
2. Uses less memory
A normal function that returns a list stores the entire list in memory. In most cases that is undesirable, since it can require a large amount of memory.
Generators use less memory because they only produce a result when it is asked for, generating one element at a time, which is efficient when we do not need to traverse the sequence many times.
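A quick way to see this is to compare the size of a list with the size of the equivalent generator expression using sys.getsizeof (the exact numbers vary by Python version and platform):

import sys

squares_list = [x**2 for x in range(1_000_000)]  # stores a million results
squares_gen = (x**2 for x in range(1_000_000))   # stores only the iteration state

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # roughly a hundred bytes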
3. Producing infinite streams
Generators are an excellent way to produce an infinite stream of data. Such infinite streams cannot be stored in memory in their entirety, and since a generator only produces one element at a time, it can represent an infinite stream of data.
The following example can generate all the even numbers.
def all_even():
    n = 0
    while True:
        yield n
        n += 2
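For example, you can pull just as many values from this infinite generator as you need; the itertools.islice helper used here is from the standard library:

from itertools import islice

evens = all_even()
print(next(evens))             # 0
print(next(evens))             # 2
print(list(islice(evens, 5)))  # [4, 6, 8, 10, 12]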
In general, whether to use a generator depends largely on the actual requirements of the job. Think it over and choose carefully to get the best option for your situation.
|
Process Discovery using Directly-Follows Graphs
Process models modeled using Petri nets have well-defined semantics: a process execution starts from the places included in the initial marking and finishes at the places included in the final marking. In this section, another class of process models, Directly-Follows Graphs, is introduced. Directly-Follows Graphs are graphs where the nodes represent the events/activities in the log and a directed edge is present between two nodes if there is at least one trace in the log where the source event/activity is followed by the target event/activity. On top of these directed edges, it is easy to represent metrics like frequency (counting the number of times the source event/activity is followed by the target event/activity) and performance (some aggregation, for example the mean, of the time elapsed between the two events/activities).
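As an illustration independent of any particular library, a frequency-annotated Directly-Follows graph can be represented as a simple mapping from (source activity, target activity) pairs to counts; the toy log below is made up for the example:

from collections import Counter

# a toy event log: each trace is an ordered list of activities
log = [
    ["register", "check", "decide"],
    ["register", "check", "check", "decide"],
]

dfg = Counter()
for trace in log:
    for source, target in zip(trace, trace[1:]):
        dfg[(source, target)] += 1

print(dfg)
# Counter({('register', 'check'): 2, ('check', 'decide'): 2, ('check', 'check'): 1})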
We extract a Directly-Follows graph from the log running-example.xes.
To read the running-example.xes log, the following Python code could be used:
import os
from pm4py.objects.log.importer.xes import factory as xes_importer
log = xes_importer.import_log(os.path.join("tests", "input_data", "running-example.xes"))
Then, the following code could be used to extract a Directly-Follows graph from the log:
from pm4py.algo.discovery.dfg import factory as dfg_factory
dfg = dfg_factory.apply(log)
A colored visualization of the Directly-Follows graph decorated with the frequency of activities and edges can be then obtained by using the following code:
from pm4py.visualization.dfg import factory as dfg_vis_factory
gviz = dfg_vis_factory.apply(dfg, log=log, variant="frequency")
dfg_vis_factory.view(gviz)
To get a Directly-Follows graph decorated with the performance between the edges, the following code can replace the previous two pieces of code. The specification of performance should be included in both the Directly-Follows application and the visualization part:
from pm4py.algo.discovery.dfg import factory as dfg_factory
from pm4py.visualization.dfg import factory as dfg_vis_factory
dfg = dfg_factory.apply(log, variant="performance")
gviz = dfg_vis_factory.apply(dfg, log=log, variant="performance")
dfg_vis_factory.view(gviz)
To save the DFG decorated with frequency or performance to SVG format, instead of displaying it on screen, the following code could be used:
from pm4py.algo.discovery.dfg import factory as dfg_factory
from pm4py.visualization.dfg import factory as dfg_vis_factory
dfg = dfg_factory.apply(log, variant="performance")
parameters = {"format": "svg"}
gviz = dfg_vis_factory.apply(dfg, log=log, variant="performance", parameters=parameters)
dfg_vis_factory.save(gviz, "dfg.svg")
|
Hi guys
I have recently acquired a Tact Millennium amplifier. It's very old so the possibility of finding an original remote is slim.
Someone had uploaded a CCF file based on the remote to www.remotecentral.com.
I have utilised EventGhost many times with a USB receiver, it's a great piece of software! I am wondering if it would be possible to take said CCF file and "blast" the codes from a USB transmitter to the amp. If it works, I can then use the learning function on my universal remote to store them.
I am also looking into utilising the IR blaster on my Samsung S5 phone, but I am more familiar with EventGhost than Android.
Any advice or comments would be greatly appreciated!
Thanks
kgschlosser (Site Admin)
never mind about the ccf file
download and install this program
http://files.remotecentral.com/view/582 ... oedit.html
it is a pronto editor.
once you have the editor installed go ahead and run it, then open the ccf file
File --> Open Configuration
Once you have the file opened up it may ask you if you want to convert it. You can say OK; this is only happening because the file may be for a pronto remote that is different than the one you selected when you first opened up the program.
you need to expand "Devices" in the left pane, then expand the device that is listed under it.
The items in []'s are GUIs that get displayed on the remote. each screen is going to have a bunch of different buttons for the device. so go ahead and double click on one of them.
You will see the various buttons. right click on one of the buttons and then click on Properties. the Button Properties dialog will open up. The Action tab should be what you are seeing. on the left side there will be a vertical row of buttons and on the right there is going to be a white box.. in that white box there will be one or more items. double click on one of the items. the Add IR dialog is going to open up. just above the OK button and a little to the left there is a button "View IR". click on that button. you will now be able to see the pronto code for that button. that is what you will need to paste into the Transmit IR action in EG.
You are going to have to do this for each and every button on each of the available GUI screens. I know it is kind of a pain to do, but this is the only "free" way to do it. There is a program called Extract CCF that will spit out all of the pronto codes but I believe you have to pay for it.
kgschlosser (Site Admin)
OK so scratch the above directions as that is going to be a royal pain
I have a better way.
create a folder on your desktop called "ccf_converter"
unzip the attached file into that folder. there is only a single dll in the file.
place any ccf files you want converted into that folder as well. It will bulk extract the pronto codes and button names.
create a new macro in EG and add a Python Script action. Paste the code below into that script. Click on "Apply" and then click on "Test"; it will dump all of the button names and pronto codes into the EG log.
Code:
import ctypes
from ctypes.wintypes import INT
import os
import shutil

CCF_DUMP_CODES = 0x00000010

# folder on the desktop that holds CCFDll.dll and the ccf files to convert
path = os.path.join(os.path.expanduser('~'), 'desktop', 'ccf_converter')

hCCFDll = ctypes.cdll.LoadLibrary(os.path.join(path, 'CCFDll.dll'))
CCFRunDumper = hCCFDll.CCFRunDumper
CCFRunDumper.restype = ctypes.POINTER(INT)

# dump the codes for every ccf file in the folder
for ccf_file in os.listdir(path):
    if not ccf_file.endswith('.ccf'):
        continue
    print ccf_file
    szInputCCF = ctypes.create_string_buffer(os.path.join(path, ccf_file))
    szOutputDirectory = ctypes.create_string_buffer(path)
    DumpFlags = INT(CCF_DUMP_CODES)
    CCFRunDumper(szInputCCF, szOutputDirectory, DumpFlags)

# parse the dumped files and print each button name with its pronto code
for code_file in os.listdir(os.path.join(path, 'codes')):
    code_file = os.path.join(os.path.join(path, 'codes', code_file))
    with open(code_file, 'r') as f:
        data = f.read()
    data = data.split('</tr>')[2:-2]
    code = [line.strip() for line in data[0].split('\n') if line.strip()][-1]
    code = code.split('","')[-1].split('")')[0]
    name = [line.strip() for line in data[-1].split('\n') if line.strip()][-2]
    name = name.split('">')[-1].split('</')[0]
    print name, ':', code
    print
    print

shutil.rmtree(os.path.join(path, 'codes'))
Attachments
CCFDll.zip
(393.88 KiB) Downloaded 63 times
I've managed to open the file using CCF extractor. However it was made in 2002 and is non-standard somehow, so it won't convert automatically. However CCF extractor could still view the codes, which I then copied and pasted into the iRplus online XML converter...
https://irplus-remote.github.io/converter/rcentral.html
Said file loaded into the iRplus Android app OK, and utilising my S5's IR blaster it works. The Robman over at the JP1 remote forum has kindly cleaned up the codes so that they work better, and it looks like the problem is solved. I've piggybacked the buttons onto a couple of One 4 All learning remotes for safe keeping.
Thanks for the informative posts guys, very helpful!
kgschlosser (Site Admin)
Those one4all remotes are great, aren't they? I have a bunch of them. They are well-constructed, solid remotes and you can hack them.. after all, if you can't hack it you don't own it
|
3D Projection Tutorial
I am going to teach you how to draw a 3D object.
I'm going to start with some useful information about the project below. First, it takes about 10-15 seconds to generate the map. Second, use WASD to move left, right, forward, and back, and use space and right shift (not left shift) to go up and down.
To start, the only type of math used in this tutorial is linear equations. You will still be able to follow this tutorial and render your own 3D objects if you don't know linear equations, but it may be more challenging and you likely won't understand how 3D projections work. OK, now that we're done with that, let's figure out how to project a 3D point onto a plane (a flat 2D surface). The reason I used the word project is because you take the point in 3D space, then you draw a line to the player's eye. From there you find the x and y point at z = 80. Any position at z = 80 is going to be referred to as the canvas. The number 80 after z is represented by a variable player_fov, aka the player's field of view. This variable will be used in the equations. Then from there you would draw that point at that position on the screen. One way to sum that up is that you're projecting a beam of light from a 3D point and seeing where it hits the player's eye. Now let's get into the math that projects the object.
Here's a good image that helps show what my program does to project a vector3 (x, y, z) point onto the canvas. The link to the website the image was taken from is right below the image, and it was also the most helpful website for me when it came to learning how 3D projections work: Computing the Pixel Coordinates of a 3D Point
And this gif further explains it:
Step 1: the first step is to remove b (the y-intercept) from the equation:
y = mx + b
This will make projecting the object easier. To do this I am going to use a few variables: object_x (the x position of the object), object_y (the y position of the object), object_z (the z position of the object), and player_x (the player's x position), player_y (the player's y position), player_z (the player's z position). The outputs will be new_object_x, new_object_y, and new_object_z. Those outputs will be used in all the equations. Now onto the equations to remove b:
new_object_x = player_x - object_x
new_object_y = player_y - object_y
new_object_z = player_z - object_z
Step 2: now we will find the y position of the collision on the canvas (defined at the top). The output will be screen_y and the variable m is the slope. The equations to do this are:
m = new_object_y / new_object_z
screen_y = m * player_fov + object_y
Step 3: this is the final step, and it gets the x position of the collision on the canvas. The output will be screen_x and the variable m is the slope. The equations are:
m = new_object_x / new_object_z
screen_x = m * player_fov + object_x
If you don't want to write any code yourself, then this is the code to project a 3D point, written in Python 3:
def convert_3D_to_2D(vec3_pos):
    vec3_dist = [vec3_player_pos[0] - vec3_pos[0], vec3_player_pos[1] - vec3_pos[1], vec3_player_pos[2] - vec3_pos[2]]
    try:
        m = vec3_dist[1] / vec3_dist[2]
    except ZeroDivisionError:
        m = 0
    y = m * player_fov + vec3_pos[1]
    try:
        m2 = vec3_dist[0] / vec3_dist[2]
    except ZeroDivisionError:
        m2 = 0
    y2 = m2 * player_fov + vec3_pos[0]
    return [y2, y]
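As a quick sanity check, here is a minimal, hypothetical usage of the function above; the player position, field of view, and the example point are made-up values, since the tutorial defines them elsewhere in the project:

vec3_player_pos = [0, 0, 0]  # assumed player position (x, y, z)
player_fov = 80              # the canvas sits at z = 80, as described above

# a point in front of the player, slightly up and to the right
point = [10, 5, 200]
screen_x, screen_y = convert_3D_to_2D(point)
print(screen_x, screen_y)    # 2D position to draw the point at (14.0 7.0 here)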
You are now done with the math! And congratulations on finishing the tutorial! I hope this worked for you, and if you have any questions or need help then feel free to ask in the comments.
Here are some images of a 3D map made using Perlin noise, rendered using this 3D projection method and run without a maximum render distance on my computer (not on repl):
And here are some more images of the world (not run on repl) using my ray tracing lighting system, which traces a ray from each point to the sun and checks to see if it collides with anything. This feature is too slow for repl and will not be implemented. In this image I was getting around 0.3 fps:
The project below is the same code as used to create those images, except repl runs pygame very slowly and therefore I had to implement a maximum render distance to allow you to move around the map in real time. Another feature in my project is a smooth-ish lighting system. Basically I subtract the height of one of the three points on the triangle being rendered from another. Then I use the clamp function shown below to limit how much darker or brighter an object can be. This function is also written in Python 3:
def clamp(value, min_, max_):
    return min(max(value, min_), max_)
After doing this I add the new value that I generated to the r, g, and b color before clamping each of the values to be within 0-255 using the same function shown above. I also have one more tip that may help: only draw a triangle if all three points' x and y positions are greater than 0, otherwise there will be weird lines when you pass an object.

One final feature in my program is a noise function. This creates the terrain that is rendered to the screen smoothly. First off, this is not an actual Perlin noise algorithm, but it still creates smooth terrain. The way this function works, on a basic level, is it goes from the bottom left to the top right, and on the way it gets the average height of the terrain around it (if there is no terrain around it, it sets its height to a random value). It then chooses a random value until that value is within a certain range (this range is itself random, making the heights of the terrain even more random) of the average terrain height. When it finds this height it adds it to a 2D array. I also added a chance to spawn a peak, and if it does, the height is forced to be a lot higher than the average, creating a jump in the height of the terrain around it. That's all it takes to recreate the noise function.

A tip for creating a 3D game is to use a mesh (a list of shape positions that creates an object when rendered) and use a triangle as the shape it renders. Another useful tip is to order the terrain from the top left to the bottom so that, when rendered, the terrain will not overlap and create a weird visual bug. I will be updating this tutorial once I've added camera rotation, so you too can look around your world at more angles than now. I have also started a ray tracer and have put the prototype on repl; the link is below. And finally, with all this newfound information, you should be well enough equipped to create your own 3D game or game engine from scratch. Good luck!
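To make the shading step concrete, here is a small sketch of how a brightness offset could be applied to an RGB colour with the clamp function above; the specific limits and colours are just illustrative, not the exact values my project uses:

def shade(color, brightness_offset):
    # limit how much darker or brighter a face can get,
    # then clamp every channel into the valid 0-255 range
    offset = clamp(brightness_offset, -40, 40)
    return tuple(clamp(channel + offset, 0, 255) for channel in color)

print(shade((100, 180, 60), 55))  # (140, 220, 100) -- offset capped at +40
print(shade((10, 20, 30), -70))   # (0, 0, 0) -- channels clamped at 0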
Interesting facts:
There are 22500 polygons(triangles) in each map
There are 67500 vector3(x, y, z) points in the entire map
Good sources for learning more about 3D projections and the math behind them:
|
A Python Tutorial, the Basics
A very easy Python Tutorial!
#Tutorial Jam
@elipie's jam
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
Python was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because while he was thinking about it, Guido van Rossum was also reading 'Monty Python's Flying Circus'. He thought that the language would need a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hashtags
Comments are side notes you can write in Python. As I said before, they can be used for:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure: hashtags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, written as print, are statements used to print sentences or words. For example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, written as input, are statements used to 'ask' the user for something. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to capture a value that you can use later.
Then you could add an if statement, but let's discuss that later.
3. f strings
f-strings, written as f (before a quotation mark), are used to put a value you already have into a print or input statement. So what I mean is, say I put an f-string in a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would work, but only if name has been defined. In other words, say you had an input before and you did this with it:
name = input()
Then the f-string would work. Say for the input, you put in your name. Then the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This won't use an f-string either. They are also similar. So how you would print it is like this:
name = input()
...
print("Hello ", name, "!")
The output would be nearly the same! The commas separate the strings and print puts the name in between (with a space around each argument). But JBYT27, why not a plus sign? A plus sign works too, as plain string concatenation, but every piece has to already be a string; if you try to add a number to a string without converting it first, Python gives you an error. With commas, print handles each value for you.
Really, the only time you would use this is to give back your name, or to check whether two values are equal to each other, which we'll learn in a sec.
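For example, all of these produce a greeting from the same input (a small sketch; the name is just whatever the user types):

name = input("What is your name? ")

print("Hello ", name, "!")    # commas: print inserts spaces -> Hello  JBYT27 !
print("Hello " + name + "!")  # plus sign: plain string concatenation -> Hello JBYT27!
print(f"Hello {name}!")       # f-string: same result as the plus version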
4. If, Elif, Else Statements
1. If Statements
If statements, written as if, are literally what they are called: 'if' sentences. They check whether a condition is true, and if it is, they create an effect. You can think of an if statement as a cause and effect. An example of an if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27
Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements come in!
2. Elif Statements
Elif statements, written as elif, are pretty much if statements. It's just that the words else and if are combined. So say you wanted to check more conditions. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, with an else attached to them!
3. Else Statements
Else statements, written as else, are like if and elif statements. They are used to tell the computer that if it's not this and it's not that, go to this other result. You can use it like this (following on from the code above):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So for all these modules that I listed, I'll tell you how to use them, step by step! ;) But wait, what are modules?
Modules are like packages that come pre-installed with Python. You just have to import them into your program (please correct me if I'm wrong). So take this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. By that I mean it clears the console (the black part), so it makes your screen clearer. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (I think); there's a small sketch of both below.
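As a minimal sketch of both uses, relying only on the standard time module:

import time

print("Loading...")
time.sleep(2)  # time delay: pause for 2 seconds
print("Done!")

# "scroll" text by printing one character at a time
for char in "Hello World!":
    print(char, end="", flush=True)
    time.sleep(0.1)
print()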
Note:
All of the import syntax is the same except for the names
Next is tkinter, turtle
You can use the tkinter module for GUIs (screen playing); you can import it in a normal Python repl, or you can do this in a new repl.
You can use turtle for drawing; it isn't used much for web development though.
The math and sys
The math module is used for math calculations. The sys module is used for accessing system-specific variables and functions. I don't really know how else I could explain it to you, but for more, click here
Random
The random module is used for randomizing variables and strings. Say you wanted to pick a random item from a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
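For instance, randrange picks a random integer from a range; a small sketch using only the standard random module:

import random

# a random integer from 0 to 9 (10 is not included)
print(random.randrange(10))

# a random even number from 0 to 98
print(random.randrange(0, 100, 2))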
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few days or hours, if you like reading
Video links:
ranging from 1-12 hours, if you don't like reading
Otherwise:
ranging from 5 hours to a few days, replit tutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
|
In this post we will look at the fundamental concepts of iptables and the main policies to adopt. The way iptables works is very simple, as long as we keep this series of concepts in mind:
1. Since all packets have to pass through the kernel, with iptables we can define a series of RULES that those packets must inevitably pass through. Thus, we define a RULE as the set of parameters that define (or can define) a given packet, according to various criteria.
Examples of such criteria are the protocol type (TCP, UDP, HTTP, SSH...), the IP address (source and destination), or the state of the packet (NEW, ESTABLISHED, INVALID...).
2. All rules are contained in CHAINS; we can define a chain as a set of rules that can match a set of packets. Each rule specifies what to do with a matching packet. This is what is called the 'target', which can also be a jump to a user-defined chain in the same table.
3. All chains are in turn contained in TABLES; we can define a table as a set of chains, whether built in or defined by the user.
It is also very important to bear in mind that rules are applied before or after the packet is routed, depending on the tables or chains that contain them.
With the above established, we can start applying policies, understanding a policy as each of the decisions taken for each packet, according to the following premises:
1. If a packet matches a rule, whatever that rule defines is applied to it.
2. When the packet does NOT match a rule, it is passed on to the next rule.
3. If the packet does NOT match any rule, whatever the default policy of the CHAIN defines is applied.
Let's consider the most common policies:
ACCEPT — the packet is accepted.
DROP — the packet is dropped (silently discarded).
QUEUE — the packet is passed to userspace.
RETURN — processing of the packet stops in the current chain and continues at the next rule of the previous (calling) chain.
Another fairly common one is the MASQUERADE target, but be careful with what man iptables-extensions says about it: This target is only valid in the nat table, in the POSTROUTING chain. It should only be used with dynamically assigned IP (dialup) connections: if you have static IP address, you should use the SNAT target.
Others: see man iptables-extensions (they generally require the [-m module] option, although if the [-p protocol] option is given, iptables will try to load a module with the same name as the protocol and look in it for the appropriate option, if it exists). For example, this calls the comment module to attach a comment to a rule (256 characters max.):
iptables -A INPUT -i eth1 -m comment --comment "my local LAN"
By default, on a fresh installation, the predefined policies are ACCEPT. Try running:
# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
As you can see, the firewall is inactive.
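As a hedged illustration (not part of the original post), a typical next step is to tighten the default policies and then explicitly accept the traffic you want; the interface name eth0 is only an example:

# set restrictive default policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# accept loopback traffic and packets belonging to established connections
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# accept incoming SSH on eth0
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT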
|
Trouble verifying modularity properties
Hey all,
I'm having a bit of trouble doing some numerical things with modular forms, and I simply can't figure out where I'm going wrong.
The $j$ function should satisfy $j(\gamma \tau) = j(\tau)$ for every $\tau$ in the upper half plane and every $\gamma\in SL_2(\mathbb{Z})$. I wrote some code to numerically compute some values of the $j$ function (there may be a better way to do it for the $j$ function, but I hope to eventually migrate this code to work for other modular forms which I define). Here it is below.
## Starts Laurent series in q
R.<q> = LaurentSeriesRing(QQ)
I = CC.0  # imaginary unit
precision = 75

## evaluates a function using its q-expansion
def evaluate(f, z):
    result = 0
    coeffs = f.coefficients()
    exps = f.exponents()
    for i in range(0, len(coeffs)):
        result = result + coeffs[i]*z^(exps[i])
    return result

## computes the action of a member of the modular group on tau in the upper half plane
def action(gamma, tau):
    return ((gamma[0]*tau + gamma[1])/(gamma[2]*tau + gamma[3]))

## Produce Eisenstein series with specified weight and precision of q-expansion
def eis(weight, precision):
    t = EisensteinForms(1, weight)
    t.set_precision(precision)
    t = t.eisenstein_series()
    e = t[0].q_expansion()
    return e*(1/e.list()[0])

## gives you q which corresponds to tau
def qt(tau):
    return exp(2*pi*I*tau)

## Defining delta cusp form
delta = CuspForms(1, 12).0
delta = delta.q_expansion(precision)

# Computes j function
g2 = 60*eis(4, precision)/240
j = 1728*g2^3/(27*delta)
Now when I run the following code:
tau = 1+I
gamma = [3,-1,4,-1]
print(evaluate(j,qt(tau)).n()) #j(tau)
print(evaluate(j,qt(action(gamma,tau))).n()) #j(gamma tau)
the values $j(\tau)$ and $j(\gamma\tau)$ are not equal! I would appreciate any help.
|
microsoftml.rx_fast_linear: Linear Model with Stochastic Dual Coordinate Ascent
Usage
microsoftml.rx_fast_linear()
Description
A Stochastic Dual Coordinate Ascent (SDCA) optimization trainer for linear binary classification and regression.
Details
rx_fast_linear is a trainer based on the Stochastic Dual Coordinate Ascent (SDCA) method, a state-of-the-art optimization technique for convex objective functions. The algorithm can be scaled for use on large out-of-memory data sets due to a semi-asynchronized implementation that supports multi-threading. Convergence is underwritten by periodically enforcing synchronization between primal and dual updates in a separate thread. Several choices of loss functions are also provided. The SDCA method combines several of the best properties and capabilities of logistic regression and SVM algorithms. For more information on SDCA, see the citations in the reference section.
Traditional optimization algorithms, such as stochastic gradient descent (SGD), optimize the empirical loss function directly. The SDCA chooses a different approach that optimizes the dual problem instead. The dual loss function is parametrized by per-example weights. In each iteration, when a training example from the training data set is read, the corresponding example weight is adjusted so that the dual loss function is optimized with respect to the current example. No learning rate is needed by SDCA to determine step size as is required by various gradient descent methods.
rx_fast_linear currently supports binary classification with three types of loss functions: log loss, hinge loss, and smoothed hinge loss. Linear regression is also supported, with a squared loss function. Elastic net regularization can be specified by the l2_weight and l1_weight parameters. Note that the l2_weight has an effect on the rate of convergence. In general, the larger the l2_weight, the faster SDCA converges.
Note that rx_fast_linear is a stochastic and streaming optimization algorithm. The results depend on the order of the training data. For reproducible results, it is recommended that one sets shuffle to False and train_threads to 1.
Arguments
formula
The formula as described in revoscalepy.rx_formula. Interaction terms and F() are not currently supported in microsoftml.
data
A data source object or a character string specifying a .xdf file or a data frame object.
method
Specifies the model type with a character string: "binary" for the default binary classification or "regression" for linear regression.
loss_function
Specifies the empirical loss function to optimize. For binary classification, the following choices are available:
log_loss: The log-loss. This is the default.
hinge_loss: The SVM hinge loss. Its parameter represents the margin size.
smooth_hinge_loss: The smoothed hinge loss. Its parameter represents the smoothing constant.
For linear regression, the squared loss squared_loss is currently supported. When this parameter is set to None, its default value depends on the type of learning:
The following example changes the loss_function to hinge_loss: rx_fast_linear(..., loss_function=hinge_loss()).
l1_weight
Specifies the L1 regularization weight. The value must be either non-negative or None. If None is specified, the actual value is automatically computed based on the data set. None is the default value.
l2_weight
Specifies the L2 regularization weight. The value must be either non-negative or None. If None is specified, the actual value is automatically computed based on the data set. None is the default value.
train_threads
Specifies how many concurrent threads can be used to run the algorithm. When this parameter is set to None, the number of threads used is determined based on the number of logical processors available to the process as well as the sparsity of data. Set it to 1 to run the algorithm in a single thread.
convergence_tolerance
Specifies the tolerance threshold used as a convergence criterion. It must be between 0 and 1. The default value is 0.1. The algorithm is considered to have converged if the relative duality gap, which is the ratio between the duality gap and the primal loss, falls below the specified convergence tolerance.
max_iterations
Specifies an upper bound on the number of training iterations. This parameter must be positive or None. If None is specified, the actual value is automatically computed based on the data set. Each iteration requires a complete pass over the training data. Training terminates after the total number of iterations reaches the specified upper bound or when the loss function converges, whichever happens earlier.
shuffle
Specifies whether to shuffle the training data. Set True to shuffle the data; False not to shuffle. The default value is True. SDCA is a stochastic optimization algorithm. If shuffling is turned on, the training data is shuffled on each iteration.
check_frequency
The number of iterations after which the loss function is computed and checked to determine whether it has converged. The value specified must be a positive integer or None. If None, the actual value is automatically computed based on the data set. Otherwise, for example, if checkFrequency = 5 is specified, then the loss function is computed and convergence is checked every 5 iterations. The computation of the loss function requires a separate complete pass over the training data.
normalize
Specifies the type of automatic normalization used:
"Auto": if normalization is needed, it is performed automatically. This is the default choice.
"No": no normalization is performed.
"Yes": normalization is performed.
"Warn": if normalization is needed, a warning message is displayed, but normalization is not performed.
Normalization rescales disparate data ranges to a standard scale. Feature scaling ensures the distances between data points are proportional and enables various optimization methods such as gradient descent to converge much faster. If normalization is performed, a MaxMin normalizer is used. It normalizes values in an interval [a, b] where -1 <= a <= 0 and 0 <= b <= 1 and b - a = 1. This normalizer preserves sparsity by mapping zero to zero.
ml_transforms
Specifies a list of MicrosoftML transforms to be performed on the data before training, or None if no transforms are to be performed. See featurize_text, categorical, and categorical_hash for transformations that are supported. These transformations are performed after any specified Python transformations. The default value is None.
ml_transform_vars
Specifies a character vector of variable names to be used in ml_transforms, or None if none are to be used. The default value is None.
row_selection
NOT SUPPORTED. Specifies the rows (observations) from the data set that are to be used by the model with the name of a logical variable from the data set (in quotes) or with a logical expression using variables in the data set. For example:
row_selection = "old"will only use observations in which the value of the variableoldisTrue.
row_selection = (age > 20) & (age < 65) & (log(income) > 10)only uses observations in which the value of theagevariable is between 20 and 65 and the value of thelogof theincomevariable is greater than 10.
The row selection is performed after processing any datatransformations (see the arguments transforms ortransform_function). As with all expressions, row_selection can bedefined outside of the function call using the expressionfunction.
transforms
NOT SUPPORTED. An expression of the form that represents the first round of variable transformations. As with all expressions, transforms (or row_selection) can be defined outside of the function call using the expression function.
transform_objects
NOT SUPPORTED. A named list that contains objects that can be referenced by transforms, transform_function, and row_selection.
transform_function
The variable transformation function.
transform_variables
A character vector of input data set variables needed for the transformation function.
transform_packages
NOT SUPPORTED. A character vector specifying additional Python packages (outside of those specified in RxOptions.get_option("transform_packages")) to be made available and preloaded for use in variable transformation functions. For example, those explicitly defined in revoscalepy functions via their transforms and transform_function arguments or those defined implicitly via their formula or row_selection arguments. The transform_packages argument may also be None, indicating that no packages outside RxOptions.get_option("transform_packages") are preloaded.
transform_environment
NOT SUPPORTED. A user-defined environment to serve as a parent to all environments developed internally and used for variable data transformation. If transform_environment = None, a new "hash" environment with parent revoscalepy.baseenv is used instead.
blocks_per_read
Specifies the number of blocks to read for each chunk of data read from the data source.
report_progress
An integer value that specifies the level of reporting on the row processing progress:
0: no progress is reported.
1: the number of processed rows is printed and updated.
2: rows processed and timings are reported.
3: rows processed and all timings are reported.
verbose
An integer value that specifies the amount of output wanted. If 0, no verbose output is printed during calculations. Integer values from 1 to 4 provide increasing amounts of information.
compute_context
Sets the context in which computations are executed, specified with a valid revoscalepy.RxComputeContext. Currently local and revoscalepy.RxInSqlServer compute contexts are supported.
ensemble
Control parameters for ensembling.
Returns
A FastLinear object with the trained model.
Note
This algorithm is multi-threaded and will not attempt to load the entire dataset into memory.
See also
References
Example
'''
Binary Classification.
'''
import numpy
import pandas
from microsoftml import rx_fast_linear, rx_predict
from revoscalepy.etl.RxDataStep import rx_data_step
from microsoftml.datasets.datasets import get_dataset
infert = get_dataset("infert")
import sklearn
if sklearn.__version__ < "0.18":
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
infertdf = infert.as_df()
infertdf["isCase"] = infertdf.case == 1
data_train, data_test, y_train, y_test = train_test_split(infertdf, infertdf.isCase)
forest_model = rx_fast_linear(
formula=" isCase ~ age + parity + education + spontaneous + induced ",
data=data_train)
# RuntimeError: The type (RxTextData) for file is not supported.
score_ds = rx_predict(forest_model, data=data_test,
extra_vars_to_write=["isCase", "Score"])
# Print the first five rows
print(rx_data_step(score_ds, number_rows_read=5))
Output:
Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
Beginning processing data.
Rows Read: 186, Read Time: 0, Transform Time: 0
Beginning processing data.
Beginning processing data.
Rows Read: 186, Read Time: 0, Transform Time: 0
Beginning processing data.
Beginning processing data.
Rows Read: 186, Read Time: 0, Transform Time: 0
Beginning processing data.
Using 2 threads to train.
Automatically choosing a check frequency of 2.
Auto-tuning parameters: maxIterations = 8064.
Auto-tuning parameters: L2 = 2.666837E-05.
Auto-tuning parameters: L1Threshold (L1/L2) = 0.
Using best model from iteration 568.
Not training a calibrator because it is not needed.
Elapsed time: 00:00:00.5810985
Elapsed time: 00:00:00.0084876
Beginning processing data.
Rows Read: 62, Read Time: 0, Transform Time: 0
Beginning processing data.
Elapsed time: 00:00:00.0292334
Finished writing 62 rows.
Writing completed.
Rows Read: 5, Total Rows Processed: 5, Total Chunk Time: Less than .001 seconds
isCase PredictedLabel Score Probability
0 True True 0.990544 0.729195
1 False False -2.307120 0.090535
2 False False -0.608565 0.352387
3 True True 1.028217 0.736570
4 True False -3.913066 0.019588
Example
'''
Regression.
'''
import numpy
import pandas
from microsoftml import rx_fast_linear, rx_predict
from revoscalepy.etl.RxDataStep import rx_data_step
from microsoftml.datasets.datasets import get_dataset
attitude = get_dataset("attitude")
import sklearn
if sklearn.__version__ < "0.18":
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
attitudedf = attitude.as_df()
data_train, data_test = train_test_split(attitudedf)
model = rx_fast_linear(
formula="rating ~ complaints + privileges + learning + raises + critical + advance",
method="regression",
data=data_train)
# RuntimeError: The type (RxTextData) for file is not supported.
score_ds = rx_predict(model, data=data_test,
extra_vars_to_write=["rating"])
# Print the first five rows
print(rx_data_step(score_ds, number_rows_read=5))
Output:
Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
Beginning processing data.
Rows Read: 22, Read Time: 0.001, Transform Time: 0
Beginning processing data.
Beginning processing data.
Rows Read: 22, Read Time: 0.001, Transform Time: 0
Beginning processing data.
Beginning processing data.
Rows Read: 22, Read Time: 0, Transform Time: 0
Beginning processing data.
Using 2 threads to train.
Automatically choosing a check frequency of 2.
Auto-tuning parameters: maxIterations = 68180.
Auto-tuning parameters: L2 = 0.01.
Auto-tuning parameters: L1Threshold (L1/L2) = 0.
Using best model from iteration 54.
Not training a calibrator because it is not needed.
Elapsed time: 00:00:00.1114324
Elapsed time: 00:00:00.0090901
Beginning processing data.
Rows Read: 8, Read Time: 0, Transform Time: 0
Beginning processing data.
Elapsed time: 00:00:00.0330772
Finished writing 8 rows.
Writing completed.
Rows Read: 5, Total Rows Processed: 5, Total Chunk Time: Less than .001 seconds
rating Score
0 71.0 72.630440
1 67.0 56.995350
2 67.0 52.958641
3 72.0 80.894539
4 50.0 38.375427
|
I have a Python script with a function that I want to invoke from the Linux console.
But so far I can only call it if I run the command from the same directory, not from another one.
Bash command:
python -c 'import checkInternet; print checkInternet.internet_on()'
Python script:
#!/usr/bin/python
import urllib2
import sys
import os
def internet_on():
try:
urllib2.urlopen('http://216.58.192.142', timeout=1)
return True
except urllib2.URLError as err:
return False
#os.system('/etc/init.d/network reload')
os.system('reset-mcu')
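A common fix (a sketch, not taken from the original question; the directory below is a placeholder) is to put the script's folder on the module search path before importing it:
import sys

# Placeholder path: replace with the directory that actually contains checkInternet.py
sys.path.insert(0, '/home/user/scripts')

import checkInternet
print checkInternet.internet_on()  # Python 2 print, matching the urllib2 script above
Equivalently, exporting PYTHONPATH with that directory before running the original python -c one-liner achieves the same result.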
|
Hi guys
I have recently acquired a Tact Millennium amplifier. It's very old so the possibility of finding an original remote is slim.
Someone had uploaded a CCF file based on the remote to www.remotecentral.com.
I have utilised EventGhost many times with a USB receiver; it's a great piece of software! I am wondering if it would be possible to handle said CCF file and "blast" the codes from a USB transmitter to the amp. If it works, I can then use the learning function on my universal remote to store them.
I am also looking into utilising the IR blaster on my Samsung S5 phone, but I am more familiar with EventGhost than with Android.
Any advice or comments would be greatly appreciated!
Thanks
kgschlosser
never mind about the ccf file
download and install this program
http://files.remotecentral.com/view/582 ... oedit.html
it is a pronto editor.
once you have the editor installed go ahead and run it. then open the ccf file
File --> Open Configuration
Once you have the file opened up it may ask you if you want to convert it. You can say OK; this only happens because the file may be for a Pronto remote that is different than the one you selected when you first opened the program.
you need to expand "Devices" in the left pane. then expand the device that is listed under it.
The items in []'s are GUIs that get displayed on the remote. Each screen is going to have a bunch of different buttons for the device, so go ahead and double click on one of them.
You will see the various buttons. Right click on one of the buttons and then click on Properties. The Button Properties dialog will open up; the Action tab should be what you are seeing. On the left side there will be a vertical row of buttons and on the right there is going to be a white box. In that white box there will be one or more items. Double click on one of the items and the Add IR dialog will open up. Just above the OK button and a little to the left there is a "View IR" button; click on it and you will be able to see the pronto code for that button. That is what you will need to paste into the Transmit IR action in EG.
You are going to have to do this for each and every button on each of the available GUI screens. I know it is kind of a pain to do, but this is the only "free" way to do it. There is a program called Extract CCF that will spit out all of the pronto codes, but I believe you have to pay for it.
kgschlosser
OK so scratch the above directions as that is going to be a royal pain
I have a better way.
create a folder on your desktop called "ccf_converter"
unzip the attached file into that folder. there is only a single dll in the file.
place any ccf files you want converted into that folder as well. It will bulk extract the pronto codes and button names.
create a new macro in EG and add a Python Script action. Paste the code below into that script. Click on "Apply" and then click on "Test"; it will dump all of the button names and pronto codes into the EG log.
import ctypes
from ctypes.wintypes import INT
import os
import shutil
CCF_DUMP_CODES = 0x00000010
path = os.path.join(os.path.expanduser('~'), 'desktop', 'ccf_converter')
hCCFDll = ctypes.cdll.LoadLibrary(os.path.join(path, 'CCFDll.dll'))
CCFRunDumper = hCCFDll.CCFRunDumper
CCFRunDumper.restype = ctypes.POINTER(INT)
for ccf_file in os.listdir(path):
if not ccf_file.endswith('.ccf'):
continue
print ccf_file
szInputCCF = ctypes.create_string_buffer(os.path.join(path, ccf_file))
szOutputDirectory = ctypes.create_string_buffer(path)
DumpFlags = INT(CCF_DUMP_CODES)
CCFRunDumper(szInputCCF, szOutputDirectory, DumpFlags)
for code_file in os.listdir(os.path.join(path, 'codes')):
code_file = os.path.join(path, 'codes', code_file)
with open(code_file, 'r') as f:
data = f.read()
data = data.split('</tr>')[2:-2]
code = [line.strip() for line in data[0].split('\n') if line.strip()][-1]
code = code.split('","')[-1].split('")')[0]
name = [line.strip() for line in data[-1].split('\n') if line.strip()][-2]
name = name.split('">')[-1].split('</')[0]
print name, ':', code
print
print
shutil.rmtree(os.path.join(path, 'codes'))
Attachments
CCFDll.zip
I've managed to open the file using CCF extractor. However, it was made in 2002 and is non-standard somehow, so it won't convert automatically. CCF extractor could still view the codes, which I then copied and pasted into the iRPlus online XML converter...
https://irplus-remote.github.io/converter/rcentral.html
The converted file loaded into the iRplus Android app OK, and using my S5's IR blaster it works. The Robman over at JP-1 has kindly cleaned up the codes so that they work better, and it looks like the problem is solved. I've piggy-backed the buttons onto a couple of One 4 All learning remotes for safe keeping.
Thanks for the informative posts guys, very helpful!
kgschlosser
Those one4all remotes are great, aren't they? I have a bunch of them. They are well constructed, solid remotes and you can hack them.. after all, if you can't hack it you don't own it
|
Hello!
I used to draw a rectangle of a certain color and thickness in PDF Viewer using annotations (with the JavaScript function addAnnot()).
Could I simply draw a rectangle with any function of the PDF-Tools Library or should I also create an Annotation with the PXCp_Add3DAnnotationW() function? The problem is I'm trying to use only the PDF-Tools in order to manipulate a PDF-Document.
Thanks for any answers!
Hi!
I have some questions on the PXCp_AddLineAnnotationW function:
1. The last parameter of the function is a pointer to a PXC_CommonAnnotInfo structure. It expects, among other things, an integer value for a color. I couldn't find any equivalent of that function in WPF, so how is that value calculated? Is RGB(255, 255, 255) = 16777215? Here is an extract from the docs with pseudo code:
AnnotInfo.m_Color = RGB(200, 0, 100);
2. A comprehension question: should I draw four single lines in order to create a rectangle, or can I directly draw a rectangle with that function (via the parameter LPCPXC_RectF rect that specifies the bounding rectangle of the annotation)?
This is an extract from my code (I didn't define the values for AnnotInfo.m_Border.m_DashArray). No annotation is created; I tested it with the JavaScript command this.getAnnots(0); and it returns null:
var borderRect = new PdfXchangePro.PXC_RectF { left = selection.Left,
right = selection.Right,
top = selection.Top,
bottom = selection.Bottom };
int color = 16777215; // RGB(255, 255, 255) ???
var border = new PdfXchangePro.PXC_AnnotBorder { m_Width = StrToDouble(BorderThickness),
m_Type = PdfXchangePro.PXC_AnnotBorderStyle.ABS_Solid };
var borderInfo = new PdfXchangePro.PXC_CommonAnnotInfo{ m_Color = color,
m_Flags = Convert.ToInt32(PdfXchangePro.PXC_AnnotsFlags.AF_ReadOnly),
m_Opacity = _opacity,
m_Border = border };
var startPoint = new PdfXchangePro.PXC_PointF {x = selection.Left, y = selection.Top};
var endPoint = new PdfXchangePro.PXC_PointF {x = selection.Right, y = selection.Bottom};
int retval = PdfXchangePro.PXCp_AddLineAnnotationW(_handle,
0,
ref borderRect,
"xy",
"yx",
ref startPoint,
ref endPoint,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
color,
ref borderInfo); // function returns 0
Thanks!
Can you send me the PDF generated by your code?
P.S. RGB is a macro equivalent to the following function:
// r, g, and b in range from 0 to 255
ULONG _RGB(int r, int g, int b)
{
return (ULONG)(r + g * 256 + b * 65536);
}
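If it helps, the same packing is easy to reproduce in a few lines of Python; this is only an illustration of the macro above, not part of the SDK:
def rgb(r, g, b):
    # Pack 8-bit channels exactly as the Win32 RGB macro does
    return r + g * 256 + b * 65536

print(rgb(200, 0, 100))    # 6553800, the value used for m_Color in the pseudo code
print(rgb(255, 255, 255))  # 16777215, matching the value asked about above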
Tracker Software (Project Director)
I've got it! I had to close the document in the PDF viewer before creating a line annotation with PDF Tools Library function PXCp_AddLineAnnotationW!
I still don't understand whether I can create a rectangle as one annotation, or whether I have to construct it by generating 4 single lines. The third parameter (LPCPXC_RectF rect) in this function is, according to the documentation, the bounding rectangle of the annotation. What is the use of it?
One more question. Is it possible to suppress the pop-up annotation dialog that appears after a double-click on the annotation? Is there any parameter in the PXCp_AddLineAnnotationW function to manage it? I've only found the flag PXC_AnnotsFlags.AF_ReadOnly of the PXC_CommonAnnotInfo class, which makes the line annotation (or the annotation bounding rectangle?) read-only.
Tracker Supp-Stefan
Hi Relapse,
It is definitely possible with the low level functions, but unfortunately the only High Level ones are for adding annotations - not for deleting them.
Best,
Stefan
Tracker Supp-Stefan
Hello relapse,
As mentioned before - there are no high level functions that will allow you to delete annotations.
So you will need to read on annotations in the PDF Reference:
http://wwwimages.adobe.com/www.adobe.co ... ce_1-7.pdf
section 8.4 Annotations in the above document
or section 12.4 Annotations in the ISO version of the file:
http://wwwimages.adobe.com/www.adobe.co ... 0_2008.pdf
And then utilize the low level functions described in
3.2.5 PDF Dictionary Functions of our PDF Tools SDK manual to read and manipulate the annotations dictionary as needed.
Alternatively - you could use JS while you have the files opened in the Viewer AX. This should be quite a lot easier to implement, and will still allow you to create/edit/delete annotations as needed.
Best,
Stefan
Tracker
Tracker Supp-Stefan
Hi Relapse,
There are some snippets inside the manual, but there isn't anything more complex - as those are low level functions giving you access to the very structure of the PDF File and the way you would like to use such methods will greatly vary from case to case. You will need to get yourself acquainted with the PDF specification to be able to use those successfully.
Best,
Stefan
I do read the PDF specification.
I cannot understand how it is possible to access the Annotations dictionary of a certain page.
I've found the function PXCp_ObjectGetDictionary, but it needs an object handle. Where can I get it?
Tracker Supp-Stefan
Hi relapse,
You might want to use functions like PXCp_llGetObjectByIndex to obtain an object first, and then here is the sample from the manual for using the PXCp_ObjectGetDictionary function:
// Retrieve object's dictionary
HPDFOBJECT hObject;
...
HPDFDICTIONARY hDict;
hr = PXCp_ObjectGetDictionary(hObject, &hDict);
if (IS_DS_FAILED(hr))
{
// report error
...
}
Best,
Stefan
I'm trying to use the PXC_Rect function in order to draw a real rectangle and not an annotation. The documentation says:
HRESULT PXC_Rect(
_PXCContent* content,
double left,
double top,
double right,
double bottom
);
Parameters
content [in] Parameter content specifies the identifier for the page content to which the function will be applied.
What is this identifier for the page content and how can I get it?
Thanks!
Tracker Supp-Stefan
Hi Relapse,
This method is from the PXCLIB40 set of functions - those are aimed at creating new PDF document from scratch - so you can not use that to just add a rectangle to an already existing page I am afraid.
Otherwise - you can see how the content identifier is to be set up in the sample projects in
C:\Program Files\Tracker Software\PDF-XChange PRO 4 SDK\Examples\SDKExamples\<<YOUR Programming language>>\PDFXCDemo
Best,
Stefan
Thanks, Stefan, your patience is honorable.
Is there any difference between
HRESULT PXCp_Init(PDFDocument* pObject, LPCSTR Key, LPCSTR DevCode);
and
HRESULT PXC_NewDocument(_PXCDocument** pdf, LPCSTR key, LPCSTR devCode);
? Are the two parameters PDFDocument* pObject and _PXCDocument** pdf identical?
I've tried to mix the use of both libraries, but I got an AccessViolationException executing the PXC_GetPage function:
int pageContentIdentifier;
int pdfHandle;
int pdfPage = 0;
PdfXchangePro.PXCp_Init(out pdfHandle, PdfXchangePro.SerialNumber, PdfXchangePro.DevelopmentCode);
PdfXchangePro.PXCp_ReadDocumentW(pdfHandle, _tempFile, 0);
PdfXchange.PXC_GetPage(pdfHandle, pdfPage, out pageContentIdentifier);
PdfXchange.PXC_Rect(pdfHandle, 20, 100, 100, 20);
I've also found no function to delete a newly created (with PXC_Rect) graphical object, or is that not possible at all?
Tracker Supp-Stefan
Hi Relapse,
I am afraid you can't mix methods from the two libraries. You will need to create and save a PDF file using the PXC_ methods, and then open it and modify it using the PXCp_ ones.
As PXC_ methods are designed for building up PDF files - there are no delete methods - as you are creating a pdf file or page starting from an empty one and only adding the components you want.
Best,
Stefan
Yesterday I managed to draw a rectangle I needed but the restriction is - it must be a new pdf document. It's a pity!
Now I'm trying to delete line annotations directly in the dictionaries. By the way, I can create a line annotation of any thickness with the function PXCp_AddLineAnnotationW; there is no such limit of 20 points as in JS. But I really miss examples for handling the dictionaries. I've found an example in the forum http://www.tracker-software.com/forum3/ ... nnotationW but it's in C++ and I'm fighting with the translation of the low-level functions' declarations into C#.
Tracker Supp-Stefan
Hello Relapse,
Glad to hear that you got it working. And great to hear there are no width limitations with the PXCp_AddLineAnnotationW method.
As for samples for handling dictionaries - I am afraid that I can't help - any samples would probably be in the PDF Specification itself.
Best,
Stefan
The best advice here is to look at the C# wrappers for other projects. It is important to use the proper marshalling for types like BSTR and LPWSTR (from C# "string" types). If you look at function declarations for DLL imports in C# you'll often see a function argument prefixed by something like:
[MarshalAs(UnmanagedType.LPWStr)]
which, in a full declaration, looks like:
sometype somefunction([MarshalAs(UnmanagedType.LPWStr)] string InputLPWSTR);
UnmanagedType has a lot of members (LPWStr, BStr, etc) that you can specify for different scenarios. Check MSDN for details or use autocomplete in Visual Studio to see a list.
Also note the use of "ref" and "out" keywords that are used when the API function takes a pointer. "ref" means C# will check to see if the value is initialized; "out" means it may be uninitialized and is expected to be set by the function.
E.g. C++:
HRESULT calculate_property_of_mystruct(mystruct* input, int* output);
would be imported into C# with:
... calculate_property_of_mystruct(ref mystruct input, out int output);
Lots of reading here:
http://msdn.microsoft.com/en-us/library/26thfadc.aspx
http://msdn.microsoft.com/en-us/library/fzhhdwae.aspx
|
UNSOLVED Contextual menu - enable/disable entries?
ninalast edited by gferreira
Hiii all
I have a question about contextual menus in the Font Overview, as documented here. (so useful!! thanks)
Was wondering, is there any way to dynamically/contextually toggle either the visibility or the 'enabled' state of the menu entries? I'm looking at a situation where not all menu entries are necessarily relevant or available depending on which glyph is selected, so being able to react to that would be super useful.
Cheers
Nina
you can also provide a dict as menu item during the notification:
myMenuItems = [
dict(title="option 1", callback =self.option1Callback, enabled=False),
("option 2", self.option2Callback),
("submenu", [("option 3", self.option3Callback)])
]
hope this helps!!
ninalast edited by nina
Ah cool, thanks Frederik.
Hmm, I just tried this and setting an item to enabled=False like this does not directly have an impact - that item still looks like the others, and reacts to click too. Am I doing it wrong? (Looking at 3.3 build 1911061105)
PS. What does work is just composing the myMenuItems list on the fly so the irrelevant options don't show up at all, so that's good in the meantime. Having greyed-out/disabled options might be a useful addition if possible. Thanks!
oh, idd not working as it should be... will be fixed in next beta/release
thanks
|
# Copyright (C) 2006, Red Hat, Inc.
# Copyright (C) 2007, One Laptop Per Child
# Copyright (C) 2009, Tomeu Vizoso, Simon Schampijer
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import os
import time
import re
from gettext import gettext as _
from gi.repository import GObject
from gi.repository import Gtk
from gi.repository import Gdk
from gi.repository import Pango
from gi.repository import WebKit
from gi.repository import Soup
from sugar3 import env
from sugar3.activity import activity
from sugar3.graphics import style
from sugar3.graphics.icon import Icon
from widgets import BrowserNotebook
import globalhistory
import downloadmanager
from pdfviewer import PDFTabPage
_ZOOM_AMOUNT = 0.1
_LIBRARY_PATH = '/usr/share/library-common/index.html'
_WEB_SCHEMES = ['http', 'https', 'ftp', 'file', 'javascript', 'data',
'about', 'gopher', 'mailto']
_NON_SEARCH_REGEX = re.compile('''
(^localhost(\\.[^\s]+)?(:\\d+)?(/.*)?$|
^[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]$|
^::[0-9a-f:]*$| # IPv6 literals
^[0-9a-f:]+:[0-9a-f:]*$| # IPv6 literals
^[^\\.\s]+\\.[^\\.\s]+.*$| # foo.bar...
^https?://[^/\\.\s]+.*$|
^about:.*$|
^data:.*$|
^file:.*$)
''', re.VERBOSE)
class CommandListener(object):
def __init__(self, window):
self._window = window
def handleEvent(self, event):
if not event.isTrusted:
return
uri = event.originalTarget.ownerDocument.documentURI
if not uri.startswith('about:neterror?e=nssBadCert'):
return
cls = components.classes['@sugarlabs.org/add-cert-exception;1']
cert_exception = cls.createInstance(interfaces.hulahopAddCertException)
cert_exception.showDialog(self._window)
class TabbedView(BrowserNotebook):
__gtype_name__ = 'TabbedView'
__gsignals__ = {
'focus-url-entry': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
def __init__(self):
BrowserNotebook.__init__(self)
self.props.show_border = False
self.props.scrollable = True
self.connect('size-allocate', self.__size_allocate_cb)
self.connect('page-added', self.__page_added_cb)
self.connect('page-removed', self.__page_removed_cb)
self.add_tab()
self._update_closing_buttons()
self._update_tab_sizes()
def normalize_or_autosearch_url(self, url):
"""Normalize the url input or return a url for search.
We use SoupURI as an indication of whether the value given in url
is not something we want to search; we only do that, though, if
the address has a web scheme, because SoupURI will consider any
string: as a valid scheme, and we will end up prepending http://
to it.
This code is borrowed from Epiphany.
url -- input string that can be normalized to an url or serve
as search
Return: a string containing a valid url
"""
def has_web_scheme(address):
if address == '':
return False
scheme, sep, after = address.partition(':')
if sep == '':
return False
return scheme in _WEB_SCHEMES
soup_uri = None
effective_url = None
if has_web_scheme(url):
try:
soup_uri = Soup.URI.new(url)
except TypeError:
pass
if soup_uri is None and not _NON_SEARCH_REGEX.match(url):
# Get the user's LANG to use as default language of
# the results
locale = os.environ.get('LANG', '')
language_location = locale.split('.', 1)[0].lower()
language = language_location.split('_')[0]
# If the string doesn't look like an URI, let's search it:
url_search = 'http://www.google.com/search?' \
'q=%(query)s&ie=UTF-8&oe=UTF-8&hl=%(language)s'
query_param = Soup.form_encode_hash({'q': url})
# [2:] here is getting rid of 'q=':
effective_url = url_search % {'query': query_param[2:],
'language': language}
else:
if has_web_scheme(url):
effective_url = url
else:
effective_url = 'http://' + url
return effective_url
def __size_allocate_cb(self, widget, allocation):
self._update_tab_sizes()
def __page_added_cb(self, notebook, child, pagenum):
self._update_closing_buttons()
self._update_tab_sizes()
def __page_removed_cb(self, notebook, child, pagenum):
if self.get_n_pages():
self._update_closing_buttons()
self._update_tab_sizes()
def __new_tab_cb(self, browser, url):
new_browser = self.add_tab(next_to_current=True)
new_browser.load_uri(url)
new_browser.grab_focus()
def __open_pdf_in_new_tab_cb(self, browser, url):
tab_page = PDFTabPage()
tab_page.browser.connect('new-tab', self.__new_tab_cb)
tab_page.browser.connect('tab-close', self.__tab_close_cb)
label = TabLabel(tab_page.browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
next_index = self.get_current_page() + 1
self.insert_page(tab_page, label, next_index)
tab_page.show()
label.show()
self.set_current_page(next_index)
tab_page.setup(url)
def add_tab(self, next_to_current=False):
browser = Browser()
browser.connect('new-tab', self.__new_tab_cb)
browser.connect('open-pdf', self.__open_pdf_in_new_tab_cb)
if next_to_current:
self._insert_tab_next(browser)
else:
self._append_tab(browser)
self.emit('focus-url-entry')
return browser
def _insert_tab_next(self, browser):
tab_page = TabPage(browser)
label = TabLabel(browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
next_index = self.get_current_page() + 1
self.insert_page(tab_page, label, next_index)
tab_page.show()
self.set_current_page(next_index)
def _append_tab(self, browser):
tab_page = TabPage(browser)
label = TabLabel(browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
self.append_page(tab_page, label)
tab_page.show()
self.set_current_page(-1)
def on_add_tab(self, gobject):
self.add_tab()
def __tab_close_cb(self, label, tab_page):
self.remove_page(self.page_num(tab_page))
tab_page.destroy()
def _update_tab_sizes(self):
"""Update ta widths based in the amount of tabs."""
n_pages = self.get_n_pages()
canvas_size = self.get_allocation()
# FIXME
# overlap_size = self.style_get_property('tab-overlap') * n_pages - 1
overlap_size = 0
allowed_size = canvas_size.width - overlap_size
tab_new_size = int(allowed_size * 1.0 / (n_pages + 1))
# Four tabs ensured:
tab_max_size = int(allowed_size * 1.0 / (5))
# Eight tabs ensured:
tab_min_size = int(allowed_size * 1.0 / (9))
if tab_new_size < tab_min_size:
tab_new_size = tab_min_size
elif tab_new_size > tab_max_size:
tab_new_size = tab_max_size
for page_idx in range(n_pages):
page = self.get_nth_page(page_idx)
label = self.get_tab_label(page)
label.update_size(tab_new_size)
def _update_closing_buttons(self):
"""Prevent closing the last tab."""
first_page = self.get_nth_page(0)
first_label = self.get_tab_label(first_page)
if self.get_n_pages() == 1:
first_label.hide_close_button()
else:
first_label.show_close_button()
def load_homepage(self):
browser = self.current_browser
if os.path.isfile(_LIBRARY_PATH):
browser.load_uri('file://' + _LIBRARY_PATH)
else:
default_page = os.path.join(activity.get_bundle_path(),
"data/index.html")
browser.load_uri('file://' + default_page)
def _get_current_browser(self):
if self.get_n_pages():
return self.get_nth_page(self.get_current_page()).browser
else:
return None
current_browser = GObject.property(type=object,
getter=_get_current_browser)
def get_history(self):
tab_histories = []
for index in xrange(0, self.get_n_pages()):
tab_page = self.get_nth_page(index)
tab_histories.append(tab_page.browser.get_history())
return tab_histories
def set_history(self, tab_histories):
if tab_histories and isinstance(tab_histories[0], dict):
# Old format, no tabs
tab_histories = [tab_histories]
while self.get_n_pages():
self.remove_page(self.get_n_pages() - 1)
def is_pdf_history(tab_history):
return (len(tab_history) == 1 and
tab_history[0]['url'].lower().endswith('pdf'))
for tab_history in tab_histories:
if is_pdf_history(tab_history):
url = tab_history[0]['url']
tab_page = PDFTabPage()
tab_page.browser.connect('new-tab', self.__new_tab_cb)
tab_page.browser.connect('tab-close', self.__tab_close_cb)
label = TabLabel(tab_page.browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
self.append_page(tab_page, label)
tab_page.show()
label.show()
tab_page.setup(url, title=tab_history[0]['title'])
else:
browser = Browser()
browser.connect('new-tab', self.__new_tab_cb)
browser.connect('open-pdf', self.__open_pdf_in_new_tab_cb)
self._append_tab(browser)
browser.set_history(tab_history)
Gtk.rc_parse_string('''
style "browse-tab-close" {
xthickness = 0
ythickness = 0
}
widget "*browse-tab-close" style "browse-tab-close"''')
class TabPage(Gtk.ScrolledWindow):
__gtype_name__ = 'BrowseTabPage'
def __init__(self, browser):
GObject.GObject.__init__(self)
self._browser = browser
self.add(browser)
browser.show()
def _get_browser(self):
return self._browser
browser = GObject.property(type=object,
getter=_get_browser)
class TabLabel(Gtk.HBox):
__gtype_name__ = 'BrowseTabLabel'
__gsignals__ = {
'tab-close': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
def __init__(self, browser):
GObject.GObject.__init__(self)
browser.connect('notify::title', self.__title_changed_cb)
browser.connect('notify::load-status', self.__load_status_changed_cb)
self._title = _('Untitled')
self._label = Gtk.Label(label=self._title)
self._label.set_ellipsize(Pango.EllipsizeMode.END)
self._label.set_alignment(0, 0.5)
self.pack_start(self._label, True, True, 0)
self._label.show()
close_tab_icon = Icon(icon_name='browse-close-tab')
button = Gtk.Button()
button.props.relief = Gtk.ReliefStyle.NONE
button.props.focus_on_click = False
icon_box = Gtk.HBox()
icon_box.pack_start(close_tab_icon, True, False, 0)
button.add(icon_box)
button.connect('clicked', self.__button_clicked_cb)
button.set_name('browse-tab-close')
self.pack_start(button, False, True, 0)
close_tab_icon.show()
icon_box.show()
button.show()
self._close_button = button
def update_size(self, size):
self.set_size_request(size, -1)
def hide_close_button(self):
self._close_button.hide()
def show_close_button(self):
self._close_button.show()
def __button_clicked_cb(self, button):
self.emit('tab-close')
def __title_changed_cb(self, widget, param):
if widget.props.title:
self._label.set_text(widget.props.title)
self._title = widget.props.title
def __load_status_changed_cb(self, widget, param):
status = widget.get_load_status()
if status == WebKit.LoadStatus.FAILED:
self._label.set_text(self._title)
elif WebKit.LoadStatus.PROVISIONAL <= status \
< WebKit.LoadStatus.FINISHED:
self._label.set_text(_('Loading...'))
elif status == WebKit.LoadStatus.FINISHED:
if widget.props.title is None:
self._label.set_text(_('Untitled'))
self._title = _('Untitled')
class Browser(WebKit.WebView):
__gtype_name__ = 'Browser'
__gsignals__ = {
'new-tab': (GObject.SignalFlags.RUN_FIRST,
None,
([str])),
'open-pdf': (GObject.SignalFlags.RUN_FIRST,
None,
([str])),
}
CURRENT_SUGAR_VERSION = '0.96'
def __init__(self):
WebKit.WebView.__init__(self)
web_settings = self.get_settings()
# Add SugarLabs user agent:
identifier = ' Sugar Labs/' + self.CURRENT_SUGAR_VERSION
web_settings.props.user_agent += identifier
# Change font size based in the GtkSettings font size. The
# gtk-font-name property is a string with format '[font name]
# [font size]' like 'Sans Serif 10'.
gtk_settings = Gtk.Settings.get_default()
gtk_font_name = gtk_settings.get_property('gtk-font-name')
gtk_font_size = float(gtk_font_name.split()[-1])
web_settings.props.default_font_size = gtk_font_size * 1.2
web_settings.props.default_monospace_font_size = \
gtk_font_size * 1.2 - 2
self.set_settings(web_settings)
# Scale text and graphics:
self.set_full_content_zoom(True)
# Reference to the global history and callbacks to handle it:
self._global_history = globalhistory.get_global_history()
self.connect('notify::load-status', self.__load_status_changed_cb)
self.connect('notify::title', self.__title_changed_cb)
self.connect('download-requested', self.__download_requested_cb)
self.connect('mime-type-policy-decision-requested',
self.__mime_type_policy_cb)
self.connect('new-window-policy-decision-requested',
self.__new_window_policy_cb)
def get_history(self):
"""Return the browsing history of this browser."""
back_forward_list = self.get_back_forward_list()
items_list = self._items_history_as_list(back_forward_list)
# If this is an empty tab, return an empty history:
if len(items_list) == 1 and items_list[0] is None:
return []
history = []
for item in items_list:
history.append({'url': item.get_uri(),
'title': item.get_title()})
return history
def set_history(self, history):
"""Restore the browsing history for this browser."""
back_forward_list = self.get_back_forward_list()
back_forward_list.clear()
for entry in history:
uri, title = entry['url'], entry['title']
history_item = WebKit.WebHistoryItem.new_with_data(uri, title)
back_forward_list.add_item(history_item)
def get_history_index(self):
"""Return the index of the current item in the history."""
back_forward_list = self.get_back_forward_list()
history_list = self._items_history_as_list(back_forward_list)
current_item = back_forward_list.get_current_item()
return history_list.index(current_item)
def set_history_index(self, index):
"""Go to the item in the history specified by the index."""
back_forward_list = self.get_back_forward_list()
current_item = index - back_forward_list.get_back_length()
item = back_forward_list.get_nth_item(current_item)
if item is not None:
self.go_to_back_forward_item(item)
def _items_history_as_list(self, history):
"""Return a list with the items of a WebKit.WebBackForwardList."""
back_items = []
for n in reversed(range(1, history.get_back_length() + 1)):
item = history.get_nth_item(n * -1)
back_items.append(item)
current_item = [history.get_current_item()]
forward_items = []
for n in range(1, history.get_forward_length() + 1):
item = history.get_nth_item(n)
forward_items.append(item)
all_items = back_items + current_item + forward_items
return all_items
def get_source(self, async_cb, async_err_cb):
data_source = self.get_main_frame().get_data_source()
data = data_source.get_data()
if data_source.is_loading() or data is None:
async_err_cb()
temp_path = os.path.join(activity.get_activity_root(), 'instance')
file_path = os.path.join(temp_path, '%i' % time.time())
file_handle = file(file_path, 'w')
file_handle.write(data.str)
file_handle.close()
async_cb(file_path)
def open_new_tab(self, url):
self.emit('new-tab', url)
def __load_status_changed_cb(self, widget, param):
"""Add the url to the global history or update it."""
status = widget.get_load_status()
if status <= WebKit.LoadStatus.COMMITTED:
uri = self.get_uri()
self._global_history.add_page(uri)
def __title_changed_cb(self, widget, param):
"""Update title in global history."""
uri = self.get_uri()
if self.props.title is not None:
title = self.props.title
if not isinstance(title, unicode):
title = unicode(title, 'utf-8')
self._global_history.set_page_title(uri, title)
def __mime_type_policy_cb(self, webview, frame, request, mimetype,
policy_decision):
"""Handle downloads and PDF files."""
if mimetype == 'application/pdf':
self.emit('open-pdf', request.get_uri())
policy_decision.ignore()
return True
elif not self.can_show_mime_type(mimetype):
policy_decision.download()
return True
return False
def __new_window_policy_cb(self, webview, webframe, request,
navigation_action, policy_decision):
"""Open new tab instead of a new window.
Browse doesn't support many windows, as any Sugar activity.
So we will handle the request, ignoring it and returning True
to inform WebKit that a decision was made. And we will open a
new tab instead.
"""
policy_decision.ignore()
uri = request.get_uri()
self.open_new_tab(uri)
return True
def __download_requested_cb(self, browser, download):
downloadmanager.add_download(download, browser)
return True
class PopupDialog(Gtk.Window):
def __init__(self):
GObject.GObject.__init__(self)
self.set_type_hint(Gdk.WindowTypeHint.DIALOG)
border = style.GRID_CELL_SIZE
self.set_default_size(Gdk.Screen.width() - border * 2,
Gdk.Screen.height() - border * 2)
self.view = WebKit.WebView()
self.view.connect('notify::visibility', self.__notify_visibility_cb)
self.add(self.view)
self.view.realize()
def __notify_visibility_cb(self, web_view, pspec):
if self.view.props.visibility:
self.view.show()
self.show()
|
Creating an application
To create an application, run the following command:
python manage.py startapp polls  # polls is an arbitrary application name
This command creates a new directory, and that directory is where the main source code will go.
In Django's directory layout, you create one directory per feature (application).
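For reference, startapp produces roughly the following layout (this is standard Django output rather than anything specific to this tutorial):
polls/
    __init__.py
    admin.py
    apps.py
    migrations/
        __init__.py
    models.py
    tests.py
    views.py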
Creating a view
Django adopts the MVC way of thinking, but the pieces correspond a little differently from conventional MVC, so some care is needed.
First of all, in Django the pattern is usually called MVT rather than MVC.
What is MVT?
M stands for Model: the code that performs data operations.
V stands for View: the code that implements the business logic.
T stands for Template: the code that builds the screen, such as HTML. (It corresponds to the V (View) of other MVC frameworks.)
The point to watch out for is that what other MVC frameworks call the V (View) corresponds to Django's T (Template), and their C (Controller) corresponds to Django's V (View).
#polls/views.py
from django.http import HttpResponse
def index(request):
return HttpResponse("Hello, world. You're at the polls index.")
Next, create urls.py in the polls folder.
#polls/urls.py
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
]
Next, add the include module to the root (mysite) urls.py and configure it to load the polls/urls.py you just created.
#mysite/urls.py
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path('polls/', include('polls.urls')),
path('admin/', admin.site.urls),
]
The root urls.py describes the URL configuration needed to reach each application,
and the urls.py inside an application describes the URL configuration within that application.
Let's actually start the server and check that it works:
python manage.py runserver 8080
Start the server and open http://localhost:8080/polls/; you should see "Hello, world. You're at the polls index." displayed.
Django matches URLs as follows:
in mysite's urls.py, path('polls/', include('polls.urls')) matches <root>/polls,
and anything after <root>/polls is then resolved by referring to polls/urls.py.
|
Overview
This will be the third article in a four-part series covering the following:
Dataset analysis - We will present and discuss a dataset selected for our machine learning experiment. This will include some analysis and visualisations to give us a better understanding of what we're dealing with.
Experimental design - Before we conduct our experiment, we need to have a clear idea of what we're doing. It's important to know what we're looking for, how we're going to use our dataset, what algorithms we will be employing, and how we will determine whether the performance of our approach is successful.
Implementation - We will use the Keras API on top of TensorFlow to implement our experiment. All code will be in Python, and at the time of publishing everything is guaranteed to work within a Kaggle Notebook.
Results - Supported by figures and statistics, we will have a look at how our solution performed and discuss anything interesting about the results.
Implementation
In the last article we covered a number of experimental design issues and made some decisions for our experiments. We decided to compare the performance of two simple artificial neural networks on the Iris Flower dataset. The first neural network will be the control arm, and it will consist of a single hidden layer of four neurons. The second neural network will be the experimental arm, and it will consist of a single hidden layer of five neurons. We will train both of these using default configurations supplied by the Keras library and collect thirty accuracy samples per arm. We will then apply the Wilcoxon Rank Sums test to test the significance of our results.
A simple training and testing strategy
With our dataset analysis and experimental design complete, let's jump straight into coding up the experiments.
If your desired dataset is hosted on Kaggle, as it is with the Iris Flower Dataset, you can spin up a Kaggle Notebook easily through the web interface:
Creating a Kaggle Notebook with the Iris dataset ready for use.
You're also welcome to use your own development environment, provided you can load the Iris Flower dataset.
Import packages
Before we can make use of the many libraries available for Python, we need to import them into our notebook. We're going to need numpy, pandas, tensorflow, keras, and sklearn. Depending on your development environment these may already be installed and ready for importing. You'll need to install them if that's not the case.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import tensorflow as tf # dataflow programming
from tensorflow import keras # neural networks API
from sklearn.model_selection import train_test_split # dataset splitting
If you're using a Kaggle Kernel notebook you can just update the default cell. Below you can see I've included imports for tensorflow, keras, and sklearn.
To support those using their own coding environment, I have listed the version numbers for the imported packages below:
tensorflow==1.11.0rc1
scikit-learn==0.19.1
pandas==0.23.4
numpy==1.15.2
Preparing the dataset
First, we load the Iris Flower dataset into a pandas DataFrame using the following code:
# Load iris dataset into dataframe
iris_data = pd.read_csv("/kaggle/input/Iris.csv")
Input parameters
Now, we need to separate the four input parameters from the classification labels. There are multiple ways to do this, but we're going to use pandas.DataFrame.iloc, which allows selection from the DataFrame using integer indexing.
# Select the input feature columns (Sepal/Petal measurements)
X = iris_data.iloc[:,1:5].values
With the above code we have selected all the rows (indicated by the colon) and the columns at index 1, 2, 3, and 4 (indicated by the 1:5). You may be wondering why the column at index 5 was not included even though we specified 1:5; that's because a Python slice runs from the start index up to, but not including, the end index. If we wanted the column at index 5, we'd need to specify 1:6. It's important to remember that Python's indexing starts at 0, not 1. If we had specified 0:5, we would also be selecting the "Id" column.
To remind ourselves of what columns are at index 1, 2, 3, and 4, let's use the pandas.DataFrame.head() method from the first part.
Samples from the Iris Flower dataset with the column indices labelled in red.
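If you prefer seeing this in the notebook output rather than in the figure, the same view can be printed directly (standard pandas, nothing specific to this experiment):
# Show the first few rows so the column indices are easy to check
print(iris_data.head())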
We can also print out the contents of our new variable, X, which is storing all the Sepal Length/Width and Petal Length/Width data for our 150 samples. This is all of our input data.
The input data selected from the Iris Flower dataset.
For now, that is all the processing needed for the input parameters.
Classification labels
We know from our dataset analysis in part 1 that our samples are classified into three categories, "Iris-setosa", "Iris-virginica", and "Iris-versicolor". However, this alphanumeric representation of the labels is not compatible with our machine learning functions, so we need to convert them into something numeric.
Again, there are many ways to achieve a similar result, but let's use pandas features for categorical data. By explicitly selecting the Species column from our dataset as being of the category datatype, we can use pandas.Series.cat.codes to get numeric values for our class labels.
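A minimal sketch of that conversion is shown below; the variable name y is the one consumed by to_categorical() later on, but the exact lines here are my reconstruction rather than a snippet from the original notebook:
# Convert the Species column to the category dtype and extract integer codes
iris_data["Species"] = iris_data["Species"].astype("category")
y = iris_data["Species"].cat.codes.values  # 0, 1, 2 for the three classes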
We have one extra step, because we plan on using the categorical_crossentropy objective function to train our model. The Keras documentation gives the following instructions:
When using the categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all-zeros except for a 1 at the index corresponding to the class of the sample).
Keras Documentation (https://keras.io/losses)
What this means is we will need to use One-hot encoding. This is quite typical for categorical data which is to be used with machine learning algorithms. Here is an example of One-hot encoding using the Iris Flower dataset:
One-hot encoding of the Iris Flower dataset class labels.
You can see that each classification label has its own column, so Setosa is\(1,0,0\), Virginica is \(0,1,0\), and Versicolor is \(0,0,1\).
Luckily encoding our labels using Python and Keras is easy, and we've already completed the first step, which is converting our alphanumeric classes to numeric ones. To convert to One-hot encoding we can use keras.utils.to_categorical():
# Use One-hot encoding for class labels
Y = keras.utils.to_categorical(y,num_classes=None)
Training and testing split
In the previous part of this series we decided on the following:
The Iris Flower dataset is relatively small at exactly 150 samples. Because of this, we will use 70% of the dataset for training, and the remaining 30% for testing, otherwise our test set will be a little on the small side.
Machine Learning with Kaggle Notebooks - Part 2
This is where sklearn.model_selection.train_test_split() comes in. This function will split our dataset into a randomised training and testing subset:
# split into randomised training and testing subset
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=0.3,random_state=0)
This code splits the data, giving 30% (45 samples) to the testing set and the remaining 70% (105 samples) to the training set. The 30/70 split is defined using test_size=0.3, and random_state=0 defines the seed for the randomisation of the subsets.
These have been spread across four new arrays storing the following data:
X_train : the input parameters, to be used for training.
y_train : the classification labels corresponding to the X_train above, to be used for training.
X_test : the input parameters, to be used for testing.
y_test : the classification labels corresponding to the X_test above, to be used for testing.
Before moving on, I recommend you have a closer look at the above four variables, so that you understand the division of the dataset.
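For example, a quick sanity check of the array shapes (a minimal sketch; the exact numbers assume the 70/30 split above) could be:
# Confirm the 105/45 split of the 150 samples
print(X_train.shape, X_test.shape)   # expect (105, 4) and (45, 4)
print(y_train.shape, y_test.shape)   # expect (105, 3) and (45, 3)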
Neural networks with Keras
Keras is the software library we will be using through Python to code up and conduct our experiments. It's a user-friendly, high-level neural networks library which in our case will be running on top of TensorFlow. What is most attractive about Keras is how quickly you can go from your design to the result.
Configuring the model
The keras.Sequential() model allows you to build a neural network by stacking layers. You can add layers using the add() method, which in our case will be Dense() layers. A dense layer is a layer in which every neuron is connected to every neuron in the next layer. Dense() expects a number of parameters, e.g. the number of neurons on the layer, the activation function, the input_shape (if it is the first layer in the model), etc.
model = keras.Sequential()
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))
In the above code we have created our empty model and then added two layers; the first is a hidden layer consisting of four neurons which are expecting four inputs. The second layer is the output layer consisting of our three output neurons.
We then need to configure our model for training, which is achieved using the compile() method. Here we will specify our optimiser to be Adam(), configure for categorical classification, and specify our use of accuracy for the metric.
model.compile(keras.optimizers.Adam(), 'categorical_crossentropy', metrics=['accuracy'])
At this point, you may wish to use the summary() method to confirm you'vebuilt the model as intended:
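For example, called right after compile():
# Print a layer-by-layer overview of the model
model.summary()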
Training the model
Now comes the actual training of the model! We're going to use the fit() method of the model and specify the training input data and desired labels, the number of epochs (the number of times the training algorithm sees the entire dataset), and a flag to set the verbosity of the process to silent. Setting the verbosity to silent is entirely optional, but it helps us manage the notebook output.
model.fit(X_train, y_train, epochs=300, verbose=0)
If you're interested in receiving more feedback during the training (or optimisation) process, you can remove the assignment of the verbose flag when invoking the fit() method to use the default value. Now when the training algorithm is being executed, you will see output at every epoch:
Testing the model
After the neural network has been trained, we want to evaluate it against our test set and output its accuracy. The evaluate() method returns a list containing the loss value at index 0 and, in this case, the accuracy metric at index 1.
accuracy = model.evaluate(X_test, y_test)[1]
If we run all the code up until this point, and we output the contents of our accuracy variable, we should see something similar to the following:
Generating all our results
Up until this point, we have successfully prepared the Iris Flower dataset, configured our model, trained our model, evaluated it using the test set, and reported its accuracy. However, this reported accuracy is only one sample of our desired thirty.
We can do this with a simple loop to repeat the process thirty times, and a list to store all the results. This only requires some minor modifications to our existing code:
results_control_accuracy = []
for i in range(0,30):
    model = keras.Sequential()
    model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
    model.add(keras.layers.Dense(3, activation='softmax'))
    model.compile(keras.optimizers.Adam(lr=0.04), 'categorical_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=100, verbose=0)
    accuracy = model.evaluate(X_test, y_test)[1]
    results_control_accuracy.append(accuracy)
print(results_control_accuracy)
This will take a few minutes to execute depending on whether you're using a Kaggle Kernel notebook or your own development environment, but once it has, you should see a list containing the accuracy results for all thirty of the executions (but your results will vary):
[0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.6000000052981906, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588]
These are the results for our control arm; let's now do the same for our experimental arm. The experimental arm only has one difference: the number of neurons on the hidden layer. We can re-use our code for the control arm and just make a single modification where:
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
is changed to:
model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))
Of course, we'll also need to change the name of the list variable so that we don't overwrite the results for our control arm. The code will end up looking like this:
results_experimental_accuracy = []
for i in range(0,30):
    model = keras.Sequential()
    model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))
    model.add(keras.layers.Dense(3, activation='softmax'))
    model.compile(keras.optimizers.Adam(lr=0.04), 'categorical_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=100, verbose=0)
    accuracy = model.evaluate(X_test, y_test)[1]
    results_experimental_accuracy.append(accuracy)
print(results_experimental_accuracy)
After executing the above and waiting a few minutes, we will have our second set of results:
[0.9111111124356588, 0.9555555568801032, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254]
Saving the results
The results for our experiment have been generated, and it's important that we save them somewhere, so that we can use them later. There are multiple approaches to saving or persisting your data, but we are going to make use of pandas.DataFrame.to_csv():
pd.DataFrame(results_control_accuracy).to_csv('results_control_accuracy.csv', index=False)
pd.DataFrame(results_experimental_accuracy).to_csv('results_experimental_accuracy.csv', index=False)
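If you later want to load these results back into a notebook (for example in the next part), a minimal sketch would be the following (this assumes the default single-column header that to_csv writes for a plain list):
# Read a results file back into a plain Python list
results_control_accuracy = pd.read_csv('results_control_accuracy.csv')['0'].tolist()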
The above code will save your results to individual files corresponding to the arm of the experiment. Where the files go depends entirely on your development environment. If you're developing in your own local environment, then you will likely find the files in the same folder as your notebook or script. If you're using a Kaggle Notebook, it is important that you click the blue commit button in the top right of the page.
It will take a few minutes to commit your notebook, but once it's done, you know your file is safe. It's not immediately obvious where the files have been stored, but you can double-check their existence by repeating the following steps:
Conclusion
In this article we prepared our dataset such that it was ready to be fed into our neural network training and testing process. We then built and trained our neural network models using Python and Keras, followed by some simple automation to generate thirty samples per arm of our experiment.
In the next part of this four-part series, we will have a look at how our solutions performed and discuss anything interesting about the results. This will include some visualisation, and we may even return to our experiment code to produce some new results.
|
Making white noise
White noise is random noise.
Random noise stays roughly random noise even after a Fourier transform. (Although if you plot it on a log graph, you can't quite say that.)
I wrote that we would "make a wav file", but a wav file can already be produced with the code at the start of my earlier post on writing .wav files from Python (the speed comparison between the standard wave module and scipy.io.wavfile), so I'll skip that part here. For white noise I'll just show the plot of the random noise. The same goes for brown and pink.
Previously I put a sine curve into the wav file; this time I only swap that part for random noise and then process it by manipulating the frequencies.
The random noise was written using numpy.
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
x1=np.linspace(0, stop, Srate*stop)
A=0.5
y1 = np.random.rand(Srate * stop)-A
plt.plot(x1, y1)
plt.show()
#plt.plot(x1[:100], y1[:100])
#plt.show()
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
y11 = np.fft.fft(y1)/float(Srate * stop /2)
x1=np.linspace(0, stop, Srate*stop)
fig,((ax1,ax12),(ax2,ax22),(ax3,ax32))=plt.subplots(3,2)
ax1.plot(x1, y1)  # time-domain input signal
ax12.plot(f, np.abs(y11))
ax22.plot(f, np.real(y11))
ax32.plot(f, np.imag(y11))
ax1.set_title("Input(white noise)")
ax12.set_title("abs")
ax22.set_title("real")
ax32.set_title("imag")
#plt.xscale("log")
#plt.yscale("log")
plt.tight_layout()
plt.show()
Making Brownian noise as well
Generating 1/f^2 noise like Brownian noise, or 1/f noise, is difficult.
Wikipedia's pink noise page describes it as "noise whose power is inversely proportional to frequency."
There is apparently a method called the Voss algorithm for generating pink noise.
But I want to avoid anything too involved. I felt that studying something new would take more time than I have, so I gave that up.
Instead, I'll use an approach I came up with myself.
Using the white noise signal
FFT the white noise → shape it in the frequency domain → bring it back with the inverse FFT.
If this approach doesn't already have a name, I'll just call it the "Fourier (transform) method" for now.
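As a condensed sketch of that pipeline (my own summary, not code from the post; it already includes the mirroring at the Nyquist frequency that is discussed below):
import numpy as np

stop, Srate = 10, 44100
n = Srate * stop
noise = np.random.rand(n) - 0.5                       # white noise
f = np.linspace(1.0 / stop, Srate, n)                 # frequency axis used below
shape = 1.0 / f**2                                    # 1/f^2 falloff (brown noise)
half = n // 2
shape = np.r_[shape[:half], shape[1:half + 1][::-1]]  # mirror the curve at the Nyquist frequency
colored = np.real(np.fft.ifft(np.fft.fft(noise) * shape))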
Why it kept coming out as white noise
I thought "I'm sure I built it correctly, yet the sound that comes out is plain white noise…", so I checked the spectrum, and it really was white noise.
After that I couldn't work out the cause and struggled for quite a while.
The cause was that I had wiped out the folded-back (mirrored) half of the spectrum.
At the fold, the slope of the spectrum reverses.
Pink noise is said to sit between 1/f^0 = 1 (unprocessed: white noise) and 1/f^2 (brown noise).
Since the spectrum folds back at the Nyquist frequency, the slope has to be reversed at fmax/2.
For now, I want to start by making brown noise.
First, draw a curve that falls off as 1/f^2, like brown noise.
import matplotlib.pyplot as plt
import numpy as np
stop = 10
Srate = 44100
f = np.linspace(1/10.0, 44100, Srate * stop)
f1 = 1.0/f**2
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]  # mirror the shaping curve at the Nyquist frequency (integer division for Python 3)
plt.plot(f,f1)
plt.yscale("log")
plt.xlabel("frq. in log scale")
plt.xscale("log")
plt.ylabel("mag. in log scale")
plt.show()
On a log scale it comes out as a straight line.
However, the part below 1 Hz is too large.
Also, as described later, anything below 20 Hz isn't needed, so I want the value at 20 Hz to be 1.
import matplotlib.pyplot as plt
import numpy as np
stop = 10
Srate = 44100
f = np.linspace(1/10.0, 44100, Srate * stop)
f1 = 1.0/f**2
v = f1[20*stop]
f1 = f1/v
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
plt.plot(f,f1)
plt.yscale("log")
plt.xlabel("frq. in log scale")
plt.xscale("log")
plt.ylabel("mag. in log scale")
plt.show()
縺薙l繧偵�√�帙Ρ繧、繝医ヮ繧、繧コ縺ォ縺九¢蜷医o縺帙◆縺�縲�
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f**2
v = f1[20*stop]
f1 = f1/v
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
y11 = np.abs(np.fft.fft(y1)*f1)/float(Srate * stop /2)
plt.plot(f, y11)
plt.xscale("log")
plt.yscale("log")
plt.show()
In the audible frequency range the intensity is too low.
The low frequencies are too strong. (Frequencies a speaker can't reproduce anyway come out far too strongly, so the rest of the sound ends up quiet. I suppose this is still pink noise of a sort.)
Trying an inverse FFT on it:
y11 = np.fft.ifft(np.fft.fft(y1)*f1)
plt.plot(np.real(y11))
plt.show()
Restrict it to the frequencies a speaker can reproduce (roughly 20 Hz to 20 kHz, I believe).
The regions of the spectrum circled in red aren't needed, so they are replaced with 0.
The remaining frequencies are scaled so that the value at 20 Hz stays at or below 1.
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f**2
v = f1[20*stop]
f1 = f1/v
f1[:stop*20]=0
f1[stop*20000:]=0
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
f11 = f1
y11 = np.abs(np.fft.fft(y1)*f11)
plt.plot(f, y11/float(Srate*stop/2))
plt.xscale("log")
plt.yscale("log")
plt.show()
That gives this. Next, scale it up.
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f**2
v = f1[20*stop]
f1 = f1/v
f1[:stop*20]=0
f1[stop*20000:]=0
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
f11 = f1
y11 = np.fft.fft(y1)*f11
y11 = y11/abs(y11[20*stop])*float(Srate*stop/2)
y11 = np.abs(y11)/float(Srate*stop/2)
plt.plot(f, y11)
plt.xscale("log")
plt.yscale("log")
plt.show()
Applying the inverse transform:
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
x1=np.linspace(0, stop, Srate*stop)
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f**2
v = f1[20*stop]
f1 = f1/v
f1[:stop*20]=0
f1[stop*20000:]=0
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
f11 = f1
y11 = np.fft.fft(y1)*f11
y11 = y11/abs(y11[20*stop])*float(Srate*stop/2)
y11 = np.real(np.fft.ifft(y11))/stop/2.0
plt.plot(x1, y11)
plt.show()
Pink noise, in frequency terms:
import matplotlib.pyplot as plt
import numpy as np
stop = 10
Srate = 44100
f = np.linspace(1/10.0, 44100, Srate * stop)
f1 = 1.0/f
v = f1[20*stop]
f1 = f1/v
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
plt.plot(f,f1)
plt.yscale("log")
plt.xlabel("frq. in log scale")
plt.xscale("log")
plt.ylabel("mag. in log scale")
plt.show()
The spectrum of the resulting noise:
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f
v = f1[20*stop]
f1 = f1/v
f1[:stop*20]=0
f1[stop*20000:]=0
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
f11 = f1
y11 = np.fft.fft(y1)*f11
y11 = y11/abs(y11[20*stop])*float(Srate*stop/2)
y11 = np.abs(y11)/float(Srate*stop/2)
plt.plot(f, y11)
plt.xscale("log")
plt.yscale("log")
plt.show()
import numpy as np
import matplotlib.pyplot as plt
stop = 10
Srate = 44100
A=0.5
x1=np.linspace(0, stop, Srate*stop)
y1 = np.random.rand(Srate * stop)-A
f = np.linspace(1/float(stop), 44100, Srate * stop)
f1 = 1.0/f
v = f1[20*stop]
f1 = f1/v
f1[:stop*20]=0
f1[stop*20000:]=0
f1 = np.r_[f1[:len(f1)//2], f1[1:len(f1)//2+1][::-1]]
f11 = f1
y11 = np.fft.fft(y1)*f11
y11 = y11/abs(y11[20*stop])*float(Srate*stop/2)
y11 = np.real(np.fft.ifft(y11))/stop/2.0
plt.plot(x1, y11)
plt.show()
This one also ended up larger than 1.
And the waveform produced this way was to be put into a wav file...
After this I did make the wav, but...
After this I turned it into a wav file.
I did, but... this alone doesn't seem to be enough. The maximum value of the signal is too high and it produces popping (clipping) sounds.
The overall level needs a bit more adjustment.
Also, there's the point that what you actually end up hearing is mostly the high-frequency content; without the highs you can hardly hear it at all.
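A minimal sketch of the level adjustment and wav export (my own addition; the post stops before this step, and it assumes the shaped waveform is in y11 and the sample rate is Srate):
import numpy as np
from scipy.io import wavfile

y = y11 / np.max(np.abs(y11))                                  # normalise the peak to 1.0 to avoid clipping
wavfile.write("pink_noise.wav", Srate, (y * 32767).astype(np.int16))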
I wasn't planning to write anything for a while, but since I came up with this, I'm noting it down.
|
ngx-semantic-version: enhance your git and release workflow
06.11.2019
In this article I will introduce the new tool ngx-semantic-version. This new Angular Schematic allows you to set up all necessary tooling for consistent git commit messages and publishing new versions. It will help you to keep your CHANGELOG.md file up to date and to release new tagged versions. All this is done by leveraging great existing tools like commitizen, commitlint and standard-version.
Table of contents:
TL;DR
Introduction
What does it do?
How to use ngx-semantic-version
Conclusion
TL;DR
ngx-semantic-version is an Angular Schematic that will add and configure commitlint, commitizen, husky and standard-version to enforce commit messages in the conventional commit format and to automate your release and Changelog generation by respecting semver. All you have to do for the setup is to execute this command in your Angular CLI project:
ng add ngx-semantic-version
Introduction
Surviving in the stressful day-to-day life of a developer is not easy. One feature follows the other, bug fixes and breaking changes come in on a regular basis. With all the hustle and bustle, there's literally no time to write proper commit messages.
If we don't take this job seriously, at the end of the day our git history will look like this:
* 65f597a (HEAD -> master) adjust readme
* f874d16 forgot to bump up version
* 3fa9f1e release
* d09e4ee now it's fixed!
* 70c7a9b this should really fix the build
* 5f91dab let the build work (hopefully)
* 44c45b7 adds some file
* 7ac82d3 lots of stuff
* 1e34db6 initial commit
When you see such a history you know almost nothing: neither what features have been integrated nor if there was a bugfix or a breaking change. There is almost no meaningful context.
Wouldn't it be nice to have a cleaner git history that will follow a de facto standard which is commonly used?
But more than this: having a clean and well-formatted git history can help us release new software versions respecting semantic versioning and generate a changelog that includes all the changes we made and references to the commits.
No more struggling with forgotten version bumps in your package.json. No more manual changes in the CHANGELOG.md and missing references to the relevant git commits. Wouldn't it be nice to automate the release process and generate the changelog and the package version simply by checking and building them from a clean git history? And wouldn't it be nice to add all this to your Angular project with one very simple single-line command?
ngx-semantic-version will give you all that.
ngx-semantic-version will add and configure the following packages for you. We will take a look at each tool in this article.
commitlint: check commit messages to follow the conventional commit pattern
husky: hook into git events and run code at specific points (e.g. at commit or push)
commitizen: helper for writing conventional commit messages
standard-version: generate conventional changelogs from the git history
Commitlint will give you the ability to check your commit messages for a common pattern. A very prominent project following this pattern is the Angular repository itself. The conventional-commit pattern requires us to follow this simple syntax:
<type>[optional scope]: <description>
[optional body]
[optional footer]
Let's see what these parameters mean:
type can be one of the following codes:
build
ci
chore
docs
feat
fix
perf
refactor
revert
style
test
scope is optional and can be used to reference a specific part of your application, e.g. fix(dashboard): add fallback for older browsers
The description is mandatory and describes the commit in a very short form (also called subject)
If necessary, a body and a footer with further information can be added which may contain:
The keyword BREAKING CHANGES followed by a description of the breaking changes
A reference to a GitHub issue (or any other references, such as JIRA ticket number)
The keyword
An example message could look like that:
refactor(footer): move footer widget into separate module
BREAKING CHANGES
The footer widget needs to be imported from `widgets/FootWidgetModule` instead of `common` now.
closes #45
Following this pattern allows us to extract valuable information from the git history later. We can generate a well-formatted changelog file without any manual effort. It can easily be determined what version part will be increased and much more.
You may think now: "Wow, that style looks complicated and hard to remember." But don't worry: you will get used to it soon! In a second you will see how creating these messages can be simplified using commitizen.
If you want to try commitlint separately, you can even run it on its own using npx.
ngx-semantic-version will add the configuration file commitlint.config.js, which can later be adjusted to your personal needs.
Husky allows us to hook into the git lifecycle using Node.js. We can use husky in combination with commitlint to check a commit message right before actually committing it. This is what ngx-semantic-version configures in our application. It will add this part to your package.json:
...
"husky": {
"hooks": {
"commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
}
},
Husky uses the environment variable HUSKY_GIT_PARAMS containing the current git message you entered and it will pass this through commitlint so it can be evaluated.
Whenever you commit, commitlint will now automatically check your message.
Defining a well-formed message text can be quite hard when you are not used to the conventional-changelog style. The tool commitizen is there to help beginners and to prevent your own negligence. It introduces a lot of restrictions for our commit messages so that it's easier for developers to follow the pattern. Commitizen will help you to always define a commit message in the appropriate format using an interactive CLI:
When adding ngx-semantic-version it will configure commitizen to use the conventional changelog style as well:
// package.json
...
"config": {
"commitizen": {
"path": "./node_modules/cz-conventional-changelog"
}
}
If you are using Visual Studio Code, you can also use the extension Visual Studio Code Commitizen Support which will let you type the commit message directly in the editor:
Standard-version is the cherry on the cake and takes advantage of a well-formed git history. It will extract the commit message information like fix, feature and BREAKING CHANGES and use this information to automatically create a CHANGELOG.md file. The tool will also determine the next version number for the project, according to the rules of semantic versioning.
ngx-semantic-version will configure a new script in your package.json that can be used for releasing a new version:
...
"scripts": {
"release": "standard-version",
},
Whenever you want to release a version, you should use standard-version to keep your versioning clean and the CHANGELOG.md up-to-date. Furthermore, it references both commits and closed issues in your CHANGELOG.md, so that it's easier to understand what is part of the release. The tool will also tag the version in the git repo so that all versions will be available as releases via GitHub, Gitlab or whatever you are using.
Are you excited, too? Then let's get started! Configuring all the mentioned tools manually can be quite tedious. Here is where ngx-semantic-version enters the game: it is an Angular schematic that will add and configure all the tools for you.
All we need to do is run the following command:
ng add ngx-semantic-version
After installation, your package.json file will be updated. You will also find a new file commitlint.config.js which includes the basic rule set for conventional commits. You can adjust the configuration to satisfy your needs even more.
Try it out and make some changes to your project! Commitlint will now check the commit message and tell you if it is valid or not. It prevents you from committing with a "bad" message. To make things easier, commitizen will support you by building the message in the right format, and it even explicitly asks you for issue references and breaking changes.
If you typically use npm version to cut a new release, now you do this instead:
npm run release
You should also consider using one of the following commands:
npm run release -- --first-release # create the initial release and create the `CHANGELOG.md`
npm run release -- --prerelease # create a pre-release instead of a regular one
standard-version will now do the following:
"Bump" the version in package.json
Update the CHANGELOG.md file
Commit the package.json and CHANGELOG.md files
Tag a new release in the git history
Check out the official documentation of standard-version for further information.
Conclusion
I hope that ngx-semantic-version will make your daily work easier! If you have a problem, please feel free to open an issue. And if you have any improvements, I'm particularly happy about a pull request.
Happy coding, committing and releasing!
Thank you
Suggestions? Feedback? Bugs? Please
|
Devel::Peek - A data debugging tool for the XS programmer
use Devel::Peek;
Dump( $a );
Dump( $a, 5 );
Dump( @a );
Dump( %h );
DumpArray( 5, $a, $b, ... );
mstat "Point 5";
use Devel::Peek ':opd=st';
Devel::Peek contains functions which allow raw Perl datatypes to be manipulated from a Perl script. This is used by those who do XS programming to check that the data they are sending from C to Perl looks as they think it should look. The trick, then, is to know what the raw datatype is supposed to look like when it gets to Perl. This document offers some tips and hints to describe good and bad raw data.
It is very possible that this document will fall far short of being useful to the casual reader. The reader is expected to understand the material in the first few sections of perlguts.
Devel::Peek supplies a Dump() function which can dump a raw Perl datatype, and mstat("marker") function to report on memory usage (if perl is compiled with corresponding option). The function DeadCode() provides statistics on the data "frozen" into inactive CV. Devel::Peek also supplies SvREFCNT() which can query reference counts on SVs. This document will take a passive, and safe, approach to data debugging and for that it will describe only the Dump() function.
The Dump() function takes one or two arguments: something to dump, and an optional limit for recursion and array elements (default is 4). The first argument is evaluated in rvalue scalar context, with exceptions for @array and %hash, which dump the array or hash itself. So Dump @array works, as does Dump $foo. And Dump pos will call pos in rvalue context, whereas Dump ${\pos} will call it in lvalue context.
Function DumpArray() allows dumping of multiple values (useful when you need to analyze returns of functions).
The global variable $Devel::Peek::pv_limit can be set to limit the number of characters printed in various string values. Setting it to 0 means no limit.
If the use Devel::Peek directive has a :opd=FLAGS argument, this switches on debugging of opcode dispatch. FLAGS should be a combination of s, t, and P (see -D flags in perlrun). :opd is a shortcut for :opd=st.
CvGV($cv) returns one of the globs associated with a subroutine reference $cv.
debug_flags() returns a string representation of $^D (similar to what is allowed for -D flag). When called with a numeric argument, sets $^D to the corresponding value. When called with an argument of the form "flags-flags", set on/off bits of $^D corresponding to letters before/after -. (The returned value is for $^D before the modification.)
runops_debug() returns true if the current opcode dispatcher is the debugging one. When called with an argument, switches to debugging or non-debugging dispatcher depending on the argument (active for newly-entered subs/etc only). (The returned value is for the dispatcher before the modification.)
When perl is compiled with support for memory footprint debugging (default with Perl's malloc()), Devel::Peek provides an access to this API.
Use mstat() function to emit a memory state statistic to the terminal. For more information on the format of output of mstat() see "Using $ENV{PERL_DEBUG_MSTATS}" in perldebguts.
Three additional functions allow access to this statistic from Perl. First, use mstats_fillhash(%hash) to get the information contained in the output of mstat() into %hash. The fields of this hash are
minbucket nbuckets sbrk_good sbrk_slack sbrked_remains sbrks start_slack topbucket topbucket_ev topbucket_odd total total_chain total_sbrk totfree
Two additional fields free, used contain array references which provide per-bucket count of free and used chunks. Two other fields mem_size, available_size contain array references which provide the information about the allocated size and usable size of chunks in each bucket. Again, see "Using $ENV{PERL_DEBUG_MSTATS}" in perldebguts for details.
Keep in mind that only the first several "odd-numbered" buckets are used, so the information on size of the "odd-numbered" buckets which are not used is probably meaningless.
The information in
mem_size available_size minbucket nbuckets
is the property of a particular build of perl, and does not depend on the current process. If you do not provide the optional argument to the functions mstats_fillhash(), fill_mstats(), mstats2hash(), then the information in fields mem_size, available_size is not updated.
fill_mstats($buf) is a much cheaper call (both speedwise and memory-wise) which collects the statistic into $buf in machine-readable form. At a later moment you may need to call mstats2hash($buf, %hash) to use this information to fill %hash.
All three APIs fill_mstats($buf), mstats_fillhash(%hash), and mstats2hash($buf, %hash) are designed to allocate no memory if used the second time on the same $buf and/or %hash.
So, if you want to collect memory info in a cycle, you may call
$#buf = 999;
fill_mstats($_) for @buf;
mstats_fillhash(%report, 1); # Static info too
foreach (@buf) {
# Do something...
fill_mstats $_; # Collect statistic
}
foreach (@buf) {
mstats2hash($_, %report); # Preserve static info
# Do something with %report
}
The following examples don't attempt to show everything as that would be a monumental task, and, frankly, we don't want this manpage to be an internals document for Perl. The examples do demonstrate some basics of the raw Perl datatypes, and should suffice to get most determined people on their way. There are no guidewires or safety nets, nor blazed trails, so be prepared to travel alone from this point and on and, if at all possible, don't fall into the quicksand (it's bad for business).
Oh, one final bit of advice: take perlguts with you. When you return we expect to see it well-thumbed.
Let's begin by looking a simple scalar which is holding a string.
use Devel::Peek;
$a = 42; $a = "hello";
Dump $a;
The output:
SV = PVIV(0xbc288) at 0xbe9a8
REFCNT = 1
FLAGS = (POK,pPOK)
IV = 42
PV = 0xb2048 "hello"\0
CUR = 5
LEN = 8
This says $a is an SV, a scalar. The scalar type is a PVIV, which is capable of holding an integer (IV) and/or a string (PV) value. The scalar's head is allocated at address 0xbe9a8, while the body is at 0xbc288. Its reference count is 1. It has the POK flag set, meaning its current PV field is valid. Because POK is set we look at the PV item to see what is in the scalar. The \0 at the end indicates that this PV is properly NUL-terminated. Note that the IV field still contains its old numeric value, but because FLAGS doesn't have IOK set, we must ignore the IV item. CUR indicates the number of characters in the PV. LEN indicates the number of bytes allocated for the PV (at least one more than CUR, because LEN includes an extra byte for the end-of-string marker, then usually rounded up to some efficient allocation unit).
If the scalar contains a number the raw SV will be leaner.
use Devel::Peek;
$a = 42;
Dump $a;
The output:
SV = IV(0xbc818) at 0xbe9a8
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This says $a is an SV, a scalar. The scalar is an IV, a number. Its reference count is 1. It has the IOK flag set, meaning it is currently being evaluated as a number. Because IOK is set we look at the IV item to see what is in the scalar.
If the scalar from the previous example had an extra reference:
use Devel::Peek;
$a = 42;
$b = \$a;
Dump $a;
The output:
SV = IV(0xbe860) at 0xbe9a8
REFCNT = 2
FLAGS = (IOK,pIOK)
IV = 42
Notice that this example differs from the previous example only in its reference count. Compare this to the next example, where we dump $b instead of $a.
This shows what a reference looks like when it references a simple scalar.
use Devel::Peek;
$a = 42;
$b = \$a;
Dump $b;
The output:
SV = IV(0xf041c) at 0xbe9a0
REFCNT = 1
FLAGS = (ROK)
RV = 0xbab08
SV = IV(0xbe860) at 0xbe9a8
REFCNT = 2
FLAGS = (IOK,pIOK)
IV = 42
Starting from the top, this says $b is an SV. The scalar is an IV, which is capable of holding an integer or reference value. It has the ROK flag set, meaning it is a reference (rather than an integer or string). Notice that Dump follows the reference and shows us what $b was referencing. We see the same $a that we found in the previous example.
Note that the value of RV coincides with the numbers we see when we stringify $b. The addresses inside IV() are addresses of X*** structures which hold the current state of an SV. This address may change during lifetime of an SV.
This shows what a reference to an array looks like.
use Devel::Peek;
$a = [42];
Dump $a;
The output:
SV = IV(0xc85998) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xc70de8
SV = PVAV(0xc71e10) at 0xc70de8
REFCNT = 1
FLAGS = ()
ARRAY = 0xc7e820
FILL = 0
MAX = 0
ARYLEN = 0x0
FLAGS = (REAL)
Elt No. 0
SV = IV(0xc70f88) at 0xc70f98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This says $a is a reference (ROK), which points to another SV which is a PVAV, an array. The array has one element, element zero, which is another SV. The field FILL above indicates the last element in the array, similar to $#$a.
If $a pointed to an array of two elements then we would see the following.
use Devel::Peek 'Dump';
$a = [42,24];
Dump $a;
The output:
SV = IV(0x158c998) at 0x158c9a8
REFCNT = 1
FLAGS = (ROK)
RV = 0x1577de8
SV = PVAV(0x1578e10) at 0x1577de8
REFCNT = 1
FLAGS = ()
ARRAY = 0x1585820
FILL = 1
MAX = 1
ARYLEN = 0x0
FLAGS = (REAL)
Elt No. 0
SV = IV(0x1577f88) at 0x1577f98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
Elt No. 1
SV = IV(0x158be88) at 0x158be98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 24
Note that Dump will not report all the elements in the array, only the first several (depending on how deep it already went into the report tree).
The following shows the raw form of a reference to a hash.
use Devel::Peek;
$a = {hello=>42};
Dump $a;
The output:
SV = IV(0x8177858) at 0x816a618
REFCNT = 1
FLAGS = (ROK)
RV = 0x814fc10
SV = PVHV(0x8167768) at 0x814fc10
REFCNT = 1
FLAGS = (SHAREKEYS)
ARRAY = 0x816c5b8 (0:7, 1:1)
hash quality = 100.0%
KEYS = 1
FILL = 1
MAX = 7
RITER = -1
EITER = 0x0
Elt "hello" HASH = 0xc8fd181b
SV = IV(0x816c030) at 0x814fcf4
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This shows $a is a reference pointing to an SV. That SV is a PVHV, a hash. Fields RITER and EITER are used by "each" in perlfunc.
The "quality" of a hash is defined as the total number of comparisons needed to access every element once, relative to the expected number needed for a random hash. The value can go over 100%.
The total number of comparisons is equal to the sum of the squares of the number of entries in each bucket. For a random hash of <n> keys into <k> buckets, the expected value is:
n + n(n-1)/2k
The Dump() function, by default, dumps up to 4 elements from a toplevel array or hash. This number can be increased by supplying a second argument to the function.
use Devel::Peek;
$a = [10,11,12,13,14];
Dump $a;
Notice that Dump() prints only elements 10 through 13 in the above code. The following code will print all of the elements.
use Devel::Peek 'Dump';
$a = [10,11,12,13,14];
Dump $a, 5;
This is what you really need to know as an XS programmer, of course. When an XSUB returns a pointer to a C structure that pointer is stored in an SV and a reference to that SV is placed on the XSUB stack. So the output from an XSUB which uses something like the T_PTROBJ map might look something like this:
SV = IV(0xf381c) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xb8ad8
SV = PVMG(0xbb3c8) at 0xc859a0
REFCNT = 1
FLAGS = (OBJECT,IOK,pIOK)
IV = 729160
NV = 0
PV = 0
STASH = 0xc1d10 "CookBookB::Opaque"
This shows that we have an SV which is a reference, which points at another SV. In this case that second SV is a PVMG, a blessed scalar. Because it is blessed it has the OBJECT flag set. Note that an SV which holds a C pointer also has the IOK flag set. The STASH is set to the package name which this SV was blessed into.
The output from an XSUB which uses something like the T_PTRREF map, which doesn't bless the object, might look something like this:
SV = IV(0xf381c) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xb8ad8
SV = PVMG(0xbb3c8) at 0xc859a0
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 729160
NV = 0
PV = 0
A reference to a subroutine looks like this:
SV = IV(0x24d2dd8) at 0x24d2de8
REFCNT = 1
FLAGS = (TEMP,ROK)
RV = 0x24e79d8
SV = PVCV(0x24e5798) at 0x24e79d8
REFCNT = 2
FLAGS = ()
COMP_STASH = 0x22c9c50 "main"
START = 0x22eed60 ===> 0
ROOT = 0x22ee490
GVGV::GV = 0x22de9d8 "MY" :: "top_targets"
FILE = "(eval 5)"
DEPTH = 0
FLAGS = 0x0
OUTSIDE_SEQ = 93
PADLIST = 0x22e9ed8
PADNAME = 0x22e9ec0(0x22eed00) PAD = 0x22e9ea8(0x22eecd0)
OUTSIDE = 0x22c9fb0 (MAIN)
This shows that
the subroutine is not an XSUB (since START and ROOT are non-zero, and XSUB is not listed, and is thus null);
that it was compiled in the package main;
under the name MY::top_targets;
inside a 5th eval in the program;
it is not currently executed (see DEPTH);
it has no prototype (PROTOTYPE field is missing).
Dump, mstat, DeadCode, DumpArray, DumpWithOP and DumpProg, fill_mstats, mstats_fillhash, mstats2hash by default. Additionally available SvREFCNT, SvREFCNT_inc and SvREFCNT_dec.
Readers have been known to skip important parts of perlguts, causing much frustration for all.
Ilya Zakharevich ilya@math.ohio-state.edu
Copyright (c) 1995-98 Ilya Zakharevich. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Author of this software makes no claim whatsoever about suitability, reliability, edability, editability or usability of this product, and should not be kept liable for any damage resulting from the use of it. If you can use it, you are in luck, if not, I should not be kept responsible. Keep a handy copy of your backup tape at hand.
|
Get SMS delivery status with Python
# Python 2 example using urllib2
import urllib2
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/?class="+afilnet_class+"&method="+afilnet_method+"&user="+afilnet_user+"&password="+afilnet_password+"&messages="+afilnet_messages+"&output="+afilnet_output
result = urllib2.urlopen(sUrl).read()
# Python 3 example using urllib.request
from urllib.request import urlopen
from urllib.parse import urlencode
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/"
data = urlencode({"class": afilnet_class,"method": afilnet_method,"user": afilnet_user,"password": afilnet_password,"messages": afilnet_messages,"output": afilnet_output}).encode("utf-8")
result = urlopen(sUrl, data).read()
print(result)
# Example using the requests library with HTTP Basic authentication
import requests
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/basic/?class="+afilnet_class+"&method="+afilnet_method+"&messages="+afilnet_messages+"&output="+afilnet_output
result = requests.get(sUrl,auth=requests.auth.HTTPBasicAuth(afilnet_user,afilnet_password))
print(result.text)
Parameter Description Required / optional
class=sms Requested class: the class to which the request is made Required
method=getdeliverystatus Requested class method: the method of the class to which the request is made Required
user User or email of your Afilnet account Required
password Password of your Afilnet account Required
messages List of sending identifiers separated by commas (,) Required
output Output format of the result Optional
When you make requests, you will receive the following fields:
status
result (if status = success); here you receive the following values:
messageid
sms
deliverydate
deliverystatus
error (if status = error); here you receive the error code
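A minimal sketch of reading those fields (this assumes the output parameter is set so that the API returns JSON, and that result holds a list of per-message entries; the exact response layout is not specified here):
import json

data = json.loads(result.text)                 # 'result' from the requests example above
if data.get("status") == "success":
    for item in data.get("result", []):
        print(item.get("messageid"), item.get("deliverystatus"))
else:
    print("Error code:", data.get("error"))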
The possible error codes are listed below
Code Description
MISSING_USER User or email not included
MISSING_PASSWORD Password not included
MISSING_CLASS Class not included
MISSING_METHOD Method not included
MISSING_COMPULSORY_PARAM Compulsory parameter not included
INCORRECT_USER_PASSWORD Incorrect user or password
INCORRECT_CLASS Incorrect class
INCORRECT_METHOD Incorrect method
Parameters:
class: sms
method: getdeliverystatus
user: user
password: password
messages: 123456,123457,123458
output:
Request:
https://www.afilnet.com/api/http/?class=sms&method=getdeliverystatus&user=user&password=password&messages=123456,123457,123458&output=
|
April 2020
This tutorial is the first one of a series dedicated to django admin interface customization. Today, we will learn how to replace the default django admin login page with a new shiny one.
You can find the source code in this repository
The first thing we need to do is modify our admin.py file to tell Django that we will use our own AdminSite with our own template.
from django.contrib.admin import AdminSite
class MyAdminSite(AdminSite):
login_template = 'backoffice/templates/admin/login.html'
...
# Then register your models with the new admin site
site = MyAdminSite()
# Register your models here.
site.register(Shop,ShopAdmin)
site.register(Product,ProductAdmin)
Next we will modify our project urls.py file to explicitly set our new admin site
from django.conf.urls import url
from django.contrib import admin
from backoffice.admin import site
admin.site = site
admin.autodiscover()
urlpatterns = [
url(r'^admin/', admin.site.urls),
]
And now we modify our settings.py file to tell Django in which directory our custom templates can be found
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
BASE_DIR,
os.path.join(BASE_DIR, 'templates')
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
We can now create our custom login.html page in our backoffice/templates/admin directory
We need to copy the default Django admin login page inside our directory
cp ../../../environnements/genericenv/lib/python3.7/site-packages/django/contrib/admin/templates/admin/login.html backoffice/templates/admin/
And start to modify its content to add our new design code
{% block extrastyle %}{{ block.super }}
<link rel="stylesheet" type="text/css" href="{% static 'admin/css/login.css' %}">
<style>
body.login{
background-image: linear-gradient(120deg,#3498db,#8e44ad);
}
#header {
width: auto;
height: auto;
display: flex;
justify-content: space-between;
align-items: center;
padding: 10px 40px;
background-image: linear-gradient(120deg,#3498db,#8e44ad);
color: #ffc;
overflow: hidden;
}
.login .submit-row {
clear: both;
padding: 1em 0 0 9.4em;
margin: 0;
border: none;
background: none;
text-align: left;
background-image: linear-gradient(120deg,#3498db,#8e44ad);
}
.button, input[type="submit"], input[type="button"], .submit-row input, a.button {
background: transparent;
padding: 10px 15px;
border: none;
border-radius: 4px;
color: #fff;
cursor: pointer;
}
</style>
{{ form.media }}
{% endblock %}
We are modifying the CSS using a custom style tag in our HTML. We could have also customized it with a brand new CSS file, or even written a completely new HTML file instead of overriding the default Django admin login.html.
There are many possibilities, and it's up to you to choose the approach you prefer.
|
from bs4 import BeautifulSoup
import requests
import re
import datetime
metal_translation = {"Aluminium": "Aluminio", "Copper": "Cobre", "Zinc": "Zinc", "Nickel": "Níquel", "Lead": "Plomo", "Tin": "Estaño",
                     "Aluminium Alloy": "Aleación de Aluminio", "Cobalt": "Cobalto", "Gold*": "Oro*", "Silver*": "Plata*",
                     "Steel Scrap**": "Chatarra de Acero", "NASAAC": "NASAAC", "Steel Rebar**": "Varilla de Acero"}
def get_metal_values():
    names = []
    prices = []
    metals = requests.get('https://www.lme.com/').text
    soup = BeautifulSoup(metals, 'lxml')
    metal_table = soup.find("table", attrs={"class": "ring-times"})
    metal_table_names, metal_table_prices = metal_table.tbody.find_all("th"), metal_table.tbody.find_all("td")
    for name in metal_table_names:
        names.append(name.text.replace("LME ", ""))
    for price in metal_table_prices:
        prices.append(price.text.strip())
    return names, prices
def get_peso_conversion():
    peso = requests.get('https://themoneyconverter.com/USD/MXN').text
    soup1 = BeautifulSoup(peso, 'lxml')
    conversion = soup1.find("div", class_="cc-result").text
    rate = re.search(r"\d{2}\.\d{4}", conversion).group()
    return rate
def get_time():
    date = datetime.datetime.now()
    time = (f'{date.day}/{date.month}/{date.year} | {date.hour}:{date.minute}')
    return time
def convert_values():
    names, prices = get_metal_values()
    rate = get_peso_conversion()
    metal_data = dict(zip(names, prices))
    for k, v in metal_data.items():
        v = (float(v.replace(",", "")) * float(rate))
        v = ("%.2f" % v)
        k = metal_translation[k]
        print(f'{k}: {v} $')
def program_run():
print("Metal Prices by ETHAN HETRICK")
print("================================================")
print(f'{get_time()} | 1 USD = {get_peso_conversion()} MXN\n')
print("Precios de metales de London Metal Exchange: (Por tonelada métrica, *Por onza Troy)\n")
convert_values()
print("================================================")
program_run()
input("\nEscribe 'x' para terminar.\n")
EXAMPLE OUTPUT:
Metal Prices by ETHAN HETRICK
================================================
26/6/2020 | 18:28 | 1 USD = 23.0622 MXN
Precios de metales de London Metal Exchange: (Por tonelada métrica, *Por onza Troy)
Aluminio: 36484.40 $
Cobre: 138038.80 $
Zinc: 47438.95 $
Níquel: 293097.50 $
Plomo: 41004.59 $
Estaño: 391826.78 $
Aleación de Aluminio: 27997.51 $
NASAAC: 27444.02 $
Cobalto: 657272.70 $
Oro*: 40711.70 $
Plata*: 410.74 $
Chatarra de Acero: 6065.36 $
Varilla de Acero: 9720.72 $
================================================
Escribe 'x' para terminar.
My fiancée's father owns a metal recycling business in Mexico City, Mexico, which requires him to do a lot of calculations when negotiating with clients. Since he was doing this by hand, I decided to automate it for him. I used the London Metal Exchange, which is his preferred site for current prices, to get the metal prices, and I also fetched the USD-to-MXN exchange rate and applied it as well. I also needed to translate everything to Spanish, so I used a dictionary. This is my first time using web scraping and the datetime module, so any advice to make this program run more efficiently or make the code more concise is much appreciated!
|
Description
The aqObject.CompareProperty method allows you to perform a property checkpoint from script code, that is, to verify an object’s property value according to a certain condition. If the verification succeeds, the method posts a success message to the test log; otherwise it posts a failure message.
The method can test properties that have simple value types (string, number, boolean and so on) and does not support properties that contain complex values such as arrays, objects and the like. If the tested property and the expected value have different value types, TestComplete will try to convert the object property value to the type of the expected value and then perform the verification.
Note: If an object has a property or method that returns a DISPID_VALUE (for information on this, see DISPID Constants in the MSDN Library), the object is considered to have a value of a simple type and can be used for property validation. For example, a JavaScript array of simple values is an object with a string value (comma-separated list of items) and a .NET string is an object with a property returning the string contents.
Note that instead of using the aqObject.CompareProperty method, you can compare property values with expected values using the appropriate scripting language operators: =, >, >=, <, <= and so on. Using the aqObject.CompareProperty method makes sense if you need to perform complex string comparisons like "contains", "starts with" and so on, with or without letter case taken into account.
Declaration
aqObject.CompareProperty(Property, Condition, Value, CaseSensitive, MessageType)
Property [in] Required Variant
Condition [in] Required Integer
Value [in] Required Variant
CaseSensitive [in] Optional Boolean Default value: True
MessageType [in] Optional Integer Default value: lmWarning
Result Boolean
Applies To
The method is applied to the following object:
Parameters
The method has the following parameters:
Property
The property value to be checked. For example, Sys.Clipboard. This value must be of a simple (non-object) type: string, number, boolean and so on.
Condition
One of the following constants that specifies the property value test condition:
Constant Value Description
cmpEqual 0 Check whether the Property value equals to Value.
cmpNotEqual 1 Check whether the Property value is not equal to Value.
cmpGreater 2 Check whether the Property value is greater than Value.
cmpLess 3 Check whether the Property value is less than Value.
cmpGreaterOrEqual 4 Check whether the Property value is greater or equal to Value.
cmpLessOrEqual 5 Check whether the Property value is less or equal to Value.
cmpContains 6 Check whether the Property value contains Value.
cmpNotContains 7 Check whether the Property value does not contain Value.
cmpStartsWith 8 Check whether the Property value starts with Value.
cmpNotStartsWith 9 Check whether the Property value does not start with Value.
cmpEndsWith 10 Check whether the Property value ends with Value.
cmpNotEndsWith 11 Check whether the Property value does not end with Value.
cmpMatches 12 Check whether the Property value matches the regular expression specified by Value.
cmpNotMatches 13 Check whether the Property value does not match the regular expression specified by Value.
cmpIn 14 Check if Value contains the Property value. (Similar to cmpContains, but the other way round.)
cmpNotIn 15 Check if Value does not contain the Property value. (Similar to cmpNotContains, but the other way round.)
When testing a property value of a string type, you can use any of these conditions. For more information about string comparison rules, see the Remarks section.
When testing a numeric property value, you can use any of the following conditions: cmpEqual, cmpNotEqual, cmpGreater, cmpLess, cmpGreaterOrEqual, cmpLessOrEqual.
When testing a property that has a boolean value, you can only use the cmpEqual or cmpNotEqual condition.
Value
Specifies the value to test the property value against. The meaning of this parameter depends on the Condition parameter (see the table above).
CaseSensitive
If Property and Value are strings, this parameter specifies whether the method should perform case-sensitive or case-insensitive comparison; otherwise, this parameter is ignored. By default, this parameter is True, which means case-sensitive comparison; False means that the letter case is ignored.
MessageType
If the property verification fails, the method posts a message to the test log. This parameter specifies the message type. It can be one of the following constants:
Constant Value Description
lmNone 0 Do not post any message.
lmMessage 1 Post an informative message.
lmWarning 2 Post a warning message.
lmError 3 Post an error message.
Result Value
True if the property value matches the specified condition and False otherwise.
Remarks
String comparisons ("equals to", "greater than", "less than" and similar) use character codes and are not affected by the locale. For example, "b" is greater than "a", "c" is greater than "b" and so on. If the CaseSensitive parameter is True, letter case is taken into account ("a" is greater than "A"), otherwise it is ignored ("a" is equal to "A"). The comparison is performed symbol-by-symbol and finishes once a difference is found or when both strings have been compared to the end. If two strings having different lengths compare as equal to the end of one string, the longer string is considered as the greater one. For instance, "abcd" is greater than "ab".
To verify an object property's value, you can also use the CheckProperty method.
You can use the cmpIn, cmpNotIn, cmpContains and cmpNotContains conditions to test a value against comma-separated, pipe-separated lists, and so on, that is, to check whether the value equals (or does not equal) any value stored in the list. For example, you can use the cmpIn condition to check whether the value of the Sys.OSInfo.Name property that contains the name of the currently running operating system belongs to the comma-separated list "Win7,WinVista,Win2008":
JavaScript, JScript
aqObject.CompareProperty(Sys.OSInfo.Name, cmpIn, "Win7,WinVista,Win2008");
Python
aqObject.CompareProperty(Sys.OSInfo.Name, cmpIn, "Win7,WinVista,Win2008")
VBScript
Call aqObject.CompareProperty(Sys.OSInfo.Name, cmpIn, "Win7,WinVista,Win2008")
DelphiScript
aqObject.CompareProperty(Sys.OSInfo.Name, cmpIn, 'Win7,WinVista,Win2008');
C++Script, C#Script
aqObject["CompareProperty"](Sys["OSInfo"]["Name"], cmpIn, "Win7,WinVista,Win2008");
Example
The following example demonstrates how you can perform various verifications of a string value:
JavaScript, JScript
function ComparePropertySample()
{
var str = "abracadabra";
// False - letter case is different
Log.Message( aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", true) );
// True - letter case is ignored
Log.Message( aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", false) );
// True
Log.Message( aqObject.CompareProperty(str, cmpContains, "a") );
// False - letter case is different
Log.Message( aqObject.CompareProperty(str, cmpStartsWith, "ABRA", true) );
// True - letter case is ignored
Log.Message( aqObject.CompareProperty(str, cmpEndsWith, "CADABRA", false) );
}
Python
def ComparePropertySample():
str = "abracadabra"
# False - letter case is different
Log.Message(aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", True))
# True - letter case is ignored
Log.Message(aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", False))
# True
Log.Message(aqObject.CompareProperty(str, cmpContains, "a"))
# False - letter case is different
Log.Message(aqObject.CompareProperty(str, cmpStartsWith, "ABRA", True))
# True - letter case is ignored
Log.Message(aqObject.CompareProperty(str, cmpEndsWith, "CADABRA", False))
VBScript
Sub ComparePropertySample
Dim str
str = "abracadabra"
' False - letter case is different
Log.Message aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", True)
' True - letter case is ignored
Log.Message aqObject.CompareProperty(str, cmpEqual, "ABRACADABRA", False)
' True
Log.Message aqObject.CompareProperty(str, cmpContains, "a")
' False - letter case is different
Log.Message aqObject.CompareProperty(str, cmpStartsWith, "ABRA", True)
' True - letter case is ignored
Log.Message aqObject.CompareProperty(str, cmpEndsWith, "CADABRA", False)
End Sub
DelphiScript
procedure ComparePropertySample;
var str;
begin
str := 'abracadabra';
// False - letter case is different
Log.Message( aqObject.CompareProperty(str, cmpEqual, 'ABRACADABRA', true) );
// True - letter case is ignored
Log.Message( aqObject.CompareProperty(str, cmpEqual, 'ABRACADABRA', false) );
// True
Log.Message( aqObject.CompareProperty(str, cmpContains, 'a') );
// False - letter case is different
Log.Message( aqObject.CompareProperty(str, cmpStartsWith, 'ABRA', true) );
// True - letter case is ignored
Log.Message( aqObject.CompareProperty(str, cmpEndsWith, 'CADABRA', false) );
end;
C++Script, C#Script
function ComparePropertySample()
{
var str = "abracadabra";
// False - letter case is different
Log["Message"]( aqObject["CompareProperty"](str, cmpEqual, "ABRACADABRA", true) );
// True - letter case is ignored
Log["Message"]( aqObject["CompareProperty"](str, cmpEqual, "ABRACADABRA", false) );
// True
Log["Message"]( aqObject["CompareProperty"](str, cmpContains, "a") );
// False - letter case is different
Log["Message"]( aqObject["CompareProperty"](str, cmpStartsWith, "ABRA", true) );
// True - letter case is ignored
Log["Message"]( aqObject["CompareProperty"](str, cmpEndsWith, "CADABRA", false) );
}
|
Available with a Spatial Analyst license.
Summary
Combines multiple rasters so that a unique output value is assigned to each unique combination of input values.
Illustration
Usage
The Combine tool works with integer values and their associated attribute tables. If the input raster values are floating point, they will be automatically rounded to integers, tested for uniqueness against the other inputs, and recorded in the output attribute table.
The Combine tool is similar to the Combinatorial Or tool. Both tools assign a new number to each unique combination of input values.
No more than 20 rasters can be used as input to the Combine tool.
If a multiband raster is specified as one of the Input rasters (in_rasters in Python), all of its bands will be processed.
To process selected bands of a multiband raster, first create a new raster dataset composed of only the desired bands with the Composite Bands tool, then use the result as the Input raster (in_rasters in Python); a rough scripting sketch is shown after these notes.
If the cell value in any of the input rasters is NoData, the cell at that location in the output raster will also be NoData.
The output raster is always of integer type.
For formats other than Esri Grid, the output raster of this tool can have a maximum of 65,536 unique values by default.
You can increase this number by changing the ArcGIS settings. From the main menu, select Customize > ArcMap Options. In the ArcMap Options dialog box, click the Raster tab and change the value of the Maximum number of unique values to display field as needed.
See Analysis environments and Spatial Analyst for additional details on the geoprocessing environments of this tool.
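As a rough arcpy sketch of the selected-bands workflow mentioned in the notes above (the file names and band references here are hypothetical, for illustration only):
import arcpy
from arcpy import env
from arcpy.sa import *

env.workspace = "C:/sapyexamples/data"
arcpy.CheckOutExtension("Spatial")

# Build a raster that contains only the bands of interest (hypothetical names).
arcpy.CompositeBands_management(["multiband.img/Band_1", "multiband.img/Band_3"],
                                "selected.img")

# Feed the selected-bands raster to Combine together with another input raster.
outCombine = Combine(["selected.img", "zone"])
outCombine.save("C:/sapyexamples/output/outcombsel")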
Syntax
Combine(in_rasters)
Parameter Explanation Data Type
in_rasters
[in_raster,...]
The input rasters to be combined.
Raster Layer
Return Value
Name Explanation Data Type
out_raster
The output combined raster.
A unique integer value is assigned to each unique combination of input values.
Raster
Code sample
Combine example 1 (Python window)
This example takes input rasters of different formats (Grid, IMG, and TIFF) and outputs the unique combination values as a Grid raster.
import arcpy
from arcpy import env
from arcpy.sa import *
env.workspace = "C:/sapyexamples/data"
outCombine = Combine(["filter", "zone", "source.img", "dec.tif"])
outCombine.save("C:/sapyexamples/output/outcombine2")
Combine example 2 (stand-alone script)
This example takes input rasters of different formats (Grid, IMG, and TIFF) and outputs the unique combination values as a Grid raster.
# Name: Combine_Ex_02.py
# Description: Combines multiple rasters such that a unique value is
#              assigned to each unique combination of input values
# Requirements: Spatial Analyst Extension

# Import system modules
import arcpy
from arcpy import env
from arcpy.sa import *

# Set environment settings
env.workspace = "C:/sapyexamples/data"

# Set local variables
inRaster01 = "filter"
inRaster02 = "zone"
inRaster03 = "source.img"
inRaster04 = "dec.tif"

# Check out the ArcGIS Spatial Analyst extension license
arcpy.CheckOutExtension("Spatial")

# Execute Combine
outCombine = Combine([inRaster01, inRaster02, inRaster03, inRaster04])

# Save the output
outCombine.save("C:/sapyexamples/output/outcombine")
Environments
Licensing information
Basic: Requires Spatial Analyst
Standard: Requires Spatial Analyst
Advanced: Requires Spatial Analyst
|
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
all files from the pythonista-site-packages-3 repository to your local site-packages-3 folder. Just download them or use git via StaSH.
Then you can hit ...
Cmd / - to toggle comments
Cmd N - to open new tab and to show new file dialog
Cmd Shift N - to open just new tab
Cmd 0 (zero) - to toggle library view (I call it navigator)
Cmd W - to close current tab
Cmd Shift W - to close all tabs except current one
Cmd O (letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNING It works, but it's experimental and dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lose data, ... I warned you :) If you modify this code and Pythonista crashes during startup, just open pythonista3:// in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editor module functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clue yet what I should pass as editor type, when force reload should be used, ... Also some directories are excluded, so you know why if some files are missing in the open quickly dialog. Also the open quickly dialog has a hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on the smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
Ouch, did forget to commit
key_events.py. Fixed, everything's in place. Sorry.
I did move these
hacks into the blackmamba (Pythonista on steroids) package, so it will not interfere with other packages you already have in the site-packages-3 folder.
I'm dumb :) Did spend lot of time in the runtime searching for correct function to call that I completely missed that
editor.open_file already exists. Open Quickly is no longer limited to Python & Markdown files only.
Also I just added
Cmd-Shift-0 (zero) to open Dash with a) selected text as a query, b) if there's no selection, cursor position is used to find identifier around it.
@zrzka said:
Also I just added Cmd-Shift-0 (zero) to open Dash with a) selected text as a query, b) if there's no selection, cursor position is used to find identifier around it.
Very nice idea!
@zrzka, thanks. This is looking pretty cool. Is there a strategy so I can clone the repo into the site-packages-3 dir? I cloned it then moved it, I was a little unsure of what I was doing. Maybe this can't be done because of site-packages-3 being a special folder. Not sure. But maybe I should just clone the blackmamba dir and deal with the startup separately. I just want to be able to update as you update by doing a git pull, rather than moving etc...
I have one suggestion, is to do print statements in your startup file. Just a few to say what's going on. I do that for my simple startup for changing the screen brightness, font size, turning off animations etc...Anyway, just a suggestion. I inserted a print line just saying 'doing some blackmamba stuff'.
You can clone it directly into the
site-packages-3 folder in this way ...
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
.git (352.0B) 2017-08-12 10:15:40
LICENSE (1.2K) 2017-08-12 10:15:40
Readme.md (2.2K) 2017-08-12 10:15:40
blackmamba (384.0B) 2017-08-12 10:15:40
external_screen.py (5.6K) 2017-08-12 10:15:40
pythonista_startup.py (1.3K) 2017-08-12 10:15:40
[site-packages-3]$ rm -rf .git LICENSE Readme.md blackmamba external_screen.py pythonista_startup.py
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
.git (352.0B) 2017-08-12 10:16:16
LICENSE (1.2K) 2017-08-12 10:16:15
Readme.md (2.2K) 2017-08-12 10:16:15
blackmamba (384.0B) 2017-08-12 10:16:15
external_screen.py (5.6K) 2017-08-12 10:16:15
pythonista_startup.py (1.3K) 2017-08-12 10:16:16
[site-packages-3]$
And then you can update it via
git pull...
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ git pull
[site-packages-3]$
Added some comments as requested and I also created
blackmamba.startup file where all the startup things are done. Then I removed pythonista_startup.py from the GitHub repo, so it will not interfere with your pythonista_startup.py file, and did add an Installation section to the readme. Just copy & paste the lines from this section to your pythonista_startup.py file. This will be a stable interface for BM initialization. Also the pythonista_startup.py file is ignored by git (.gitignore), feel free to modify it, add your custom routines, git pull will be happy.
Yeah, I know it's an awkward installation process, but because I still consider it a
hack, I'm not going to provide a better way to install / update it. And I hope that some of these features will be added to Pythonista, thus a lot of things will be removed, new ones added, ...
@zrzka, sorry. I can't see what I am doing wrong. I have done this a few times so all your cmds are not there. But I think you can see from the printout below, the site-packages-3 dir is empty before I do the clone. But you can see I only end up with a single folder in the site-packages-3 dir after the clone.
I cant see what I am doing wrong
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
pythonista-site-packages-3 (256.0B) 2017-08-12 15:39:32
[site-packages-3]$
I had probably older StaSH. Did update StaSH and you're right, now
git clone behaves correctly. So the clone command is:
git clone https://github.com/zrzka/pythonista-site-packages-3.git . (<- space dot at the end)
Phuket2
@zrzka , perfect, works great. Thanks for your help. You might want to make a note in your installation instructions about the trailing period. I can see you changed it, but this would trip up newbies like me.
Done :) More updates later, going to take week off :)
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
cool,perfect,thanks.
and wish @omz next pythonista update can support it native.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item. E.g. maybe the user could pass the name of the wrench item as a param in bm.start('StaSh'). I guess it could also be a .py filename. Although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated, because I think it would get infinitely more complicated to let users to map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .py and run it, basically emulation of open and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work, which can be replaced with something more simpler. I can remove HW shortcuts registration from
bm.start and can provide something like bm.register_default_key_commands. And if you don't call this function in your pythonista_startup.py file, feel free to map your own shortcuts via bm.register_key_command. Or call it and add your own after bm.start().
Shortcut for the wrench item
Just do this in your
pythonista_startup.py file:
#!python3
import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui
def launch_wrench_item(name):
print('Wrench item: {}'.format(name))
def launch_stash():
launch_wrench_item('StaSh')
bm.start()
bkc.register_key_command(
bkc.PYTHONISTA_SCOPE_EDITOR,
'S',
bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
launch_stash,
'Launch StaSh')
This maps
Cmd-Shift-S to launch_stash, where you can do whatever you want :)
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.py moved to blackmamba/experimental
blackmamba/startup.py trashed
register_default_key_commands introduced in blackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3 -> blackmamba)
If you don't want to check all these changes, just update your
pythonista_startup.py content:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help to get it set up correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing I just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oppps, sorry, I missed the post above this...Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just a silly example of how to print smth with a keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am starting to take the time to do things properly. Just a matter of time before it becomes second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
But it works only if scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
|
Let's poke at VOICEROID with the Win32 API
Tohoku Kiritan
I bought VOICEROID+ 東北きりたん EX (Tohoku Kiritan EX).
Mmm, so cute!!! She's adorable.
Her voice is soft and mellow, completely my taste. The best.
Kiritan in the cloud
I can think of plenty of uses for it, but the fact that it only runs on Windows is a problem...
Wouldn't it be great if you could throw text at it over HTTP and get audio back? So that's what I want to build.
Will it run on Linux?
If it's going to run on Linux, that means Wine.
I found an article about running VOICEROID+ Yukari inside Docker on Linux. Apparently it works under Wine? And inside Docker at that. Amazing!
I tried it, but it didn't work >< Maybe a lot changed with VOICEROID+ EX.
I also built my own Wine environment and tried it, but installing .NET Framework 3.5 failed.
So I'm giving up on Wine.
Will it run on Windows Server?
It's obviously not a supported environment, but I casually tried it on Windows Server 2016 and it simply worked.
However, VOICEROID only has a GUI. If it could be driven from the command line everything would be solved, but no such interface is provided. Sad.
So let's call the Win32 API and make VOICEROID's features usable from our own program. That said, all we are really doing is forcibly driving the GUI. It's undeniably a brute-force solution, but it can't be helped.
Controlling VOICEROID via the Win32 API
You can find endless material on this kind of thing by googling something like "window automation Win32API", so I'll only explain it roughly.
The SendMessage function lets you emulate the user's mouse and keyboard input, so if we can get it to enter text and press the save button for us, we should be able to obtain a WAV file of the speech.
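For example, with pypiwin32 the core idea looks roughly like this (a minimal sketch only; the full script below is what actually does the work, and it uses the same window and control names):
import win32con
import win32gui

# Find the VOICEROID main window by its title.
hwnd = win32gui.FindWindow(None, "VOICEROID+ 東北きりたん EX")

def put_text(child, _param):
    # Type text into the rich edit control of the main window.
    if "RichEdit20W" in win32gui.GetClassName(child):
        win32gui.SendMessage(child, win32con.WM_SETTEXT, 0, "こんにちは")

if hwnd:
    win32gui.EnumChildWindows(hwnd, put_text, None)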
Done
Once the approach was decided, all that was left was to write it... I wrote it in Python.
It uses ffmpeg, which you need to install separately. The required Python library is pypiwin32:
pip install pypiwin32
Code
# coding: UTF-8
import os
import sys
import time
import hashlib
import threading
import subprocess
from win32con import *
from win32gui import *
from win32process import *

# Common settings
waitSec = 0.5
windowName = "VOICEROID+ 東北きりたん EX"

def talk(inputText):
    # Create the output directory
    outdir = "./output/"
    try:
        os.mkdir(outdir)
    except:
        pass
    # If the output file already exists, stop here
    outfile = outdir + hashlib.md5(inputText.encode("utf-8")).hexdigest() + ".mp3"
    if os.path.exists(outfile):
        return outfile
    # Wait while the temporary file still exists
    tmpfile = "tmp.wav"
    while True:
        if os.path.exists(tmpfile):
            time.sleep(waitSec)
        else:
            break
    while True:
        # Look for the VOICEROID window
        window = FindWindow(None, windowName) or FindWindow(None, windowName + "*")
        # If not found, launch VOICEROID
        if window == 0:
            subprocess.Popen([r"C:\Program Files (x86)\AHS\VOICEROID+\KiritanEX\VOICEROID.exe"])
            time.sleep(3 * waitSec)
        else:
            break
    while True:
        # If a dialog is showing, close it
        errorDialog = FindWindow(None, "エラー") or FindWindow(None, "注意") or FindWindow(None, "音声ファイルの保存")
        if errorDialog:
            SendMessage(errorDialog, WM_CLOSE, 0, 0)
            time.sleep(waitSec)
        else:
            break
    # Bring the window to the front
    SetWindowPos(window, HWND_TOPMOST, 0, 0, 0, 0, SWP_SHOWWINDOW | SWP_NOMOVE | SWP_NOSIZE)
    # Handling of the save dialog
    def enumDialogCallback(hwnd, param):
        className = GetClassName(hwnd)
        winText = GetWindowText(hwnd)
        # Set the file name
        if className.count("Edit"):
            SendMessage(hwnd, WM_SETTEXT, 0, tmpfile)
        # Press the save button
        if winText.count("保存"):
            SendMessage(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, 0)
            SendMessage(hwnd, WM_LBUTTONUP, 0, 0)
    # Saving the audio
    def save():
        time.sleep(waitSec)
        # If the save dialog exists, operate on it
        dialog = FindWindow(None, "音声ファイルの保存")
        if dialog:
            EnumChildWindows(dialog, enumDialogCallback, None)
            return
        # Retry
        save()
    # Drive VOICEROID's main window
    def enumCallback(hwnd, param):
        className = GetClassName(hwnd)
        winText = GetWindowText(hwnd)
        # Enter the text
        if className.count("RichEdit20W"):
            SendMessage(hwnd, WM_SETTEXT, 0, inputText)
        if winText.count("音声保存"):
            # Restore from minimized state
            ShowWindow(window, SW_SHOWNORMAL)
            # Start a thread to handle the save dialog
            threading.Thread(target=save).start()
            # Press the save button
            SendMessage(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, 0)
            SendMessage(hwnd, WM_LBUTTONUP, 0, 0)
    # Have VOICEROID read the text
    EnumChildWindows(window, enumCallback, None)
    # Wait while the progress dialog is displayed
    while True:
        if FindWindow(None, "音声保存"):
            time.sleep(waitSec)
        else:
            break
    # Convert to MP3
    subprocess.run(["ffmpeg", "-i", tmpfile, "-acodec", "libmp3lame", "-ab", "128k", "-ac", "2", "-ar", "44100", outfile])
    # Remove the temporary files if they exist
    try:
        os.remove(tmpfile)
        os.remove(tmpfile.replace("wav", "txt"))
    except:
        pass
    return outfile

print(talk(sys.argv[1]))
Caution
You need to have VOICEROID read some text once beforehand and save it into the directory the script runs from. When the script operates the save dialog it saves without changing the destination directory, so if the default save directory is not the same as the script's working directory, the rest of the processing will fail.
Yes, this is a shortcut on my part...
Points you might get stuck on
Without sleeps sprinkled here and there, operations sometimes fail
Button clicks can fail when the window doesn't have focus or is minimized
Starting a new read-aloud before the previous output has finished breaks things
This time I made it block until the previous one finishes
The save dialog seems to differ between Windows versions, so it may not work elsewhere
Here I used Windows Server 2016 (Windows 10)
Sending the same text repeatedly makes VOICEROID throw an error
No idea why
Next time
So now we can send arbitrary text from Python to VOICEROID and get back a WAV of it being read aloud. Even just this opens up a lot of possibilities!!
Next time, I'll run this in the cloud and build an environment where Kiritan's voice can be generated anytime, anywhere.
|
Black Mamba - open quickly and other shortcuts
Another update ...
wrench_picker renamed to action_picker
Wrench Quickly... renamed to Action Quickly... with new shortcut Cmd-Shift-A
ide.run_action added (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import * # UIKeyModifier*
import blackmamba.ide as ide
bm.register_default_key_commands()
def launch_stash():
ide.run_action('StaSh') # <- editor action custom title, case sensitive
# or ide.run_script('launch_stash.py')
register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
launch_stash, 'Launch StaSh...')
ide.run_action accepts the editor action custom title and it's case sensitive. Another option is to ignore editor actions and use just ide.run_script with a script name.
zrzka
|
I hope that, after studying these two articles, you will have a much deeper understanding of Channels and be able to use it with ease.
Having worked through the previous article, "Implementing WebSocket in Django with Channels -- Part 1", you should now have a clear picture of the various Channels concepts and be able to integrate the Channels framework into your own Django project to implement WebSocket. This article digs deeper into Channels with an example that uses Channels + Celery to implement a web-based tailf feature.
First, the goal: every logged-in user can open the tailf log page, choose a log file on the page to follow, multiple page terminals can follow any log at the same time without affecting one another, and the page also provides a stop button that terminates both the frontend output and the backend reading of the log file.
The final result is shown in the figure below.
Now let's look at the implementation in detail.
All of the code is based on the following software versions: python==3.6.3, django==2.2, channels==2.1.7, celery==4.3.0
Celery 4 support on Windows is incomplete, so please run and test on Linux.
We only want users to query a fixed set of log files, so instead of using a database we simply store the data as a global variable in settings.py.
Add a variable named TAILF to settings.py: a dict whose keys are file IDs and whose values are file paths.
TAILF = {
1: '/ops/coffee/error.log',
2: '/ops/coffee/access.log',
}
Assume you have already created an app called tailf and added it to INSTALLED_APPS in settings.py. The app's directory structure looks roughly like this:
tailf
  - migrations
    - __init__.py
  - __init__.py
  - admin.py
  - apps.py
  - models.py
  - tests.py
  - views.py
As before, we first build a standard Django page. The relevant code is as follows.
url:
from django.urls import path
from django.contrib.auth.views import LoginView,LogoutView
from tailf.views import tailf
urlpatterns = [
path('tailf', tailf, name='tailf-url'),
path('login', LoginView.as_view(template_name='login.html'), name='login-url'),
path('logout', LogoutView.as_view(template_name='login.html'), name='logout-url'),
]
Because we require that only logged-in users can view logs, we use Django's built-in LoginView and LogoutView to quickly build the login/logout functionality.
The login template is login.html, a standard login page that simply POSTs the username and password parameters; its code is not shown here.
view:
from django.conf import settings
from django.shortcuts import render
from django.contrib.auth.decorators import login_required

# Create your views here.
@login_required(login_url='/login')
def tailf(request):
    logDict = settings.TAILF
    return render(request, 'tailf/index.html', {"logDict": logDict})
The login_required decorator checks whether the user is logged in; if not, the user is redirected to the /login page.
logDict is assigned the TAILF dict we defined in settings and is passed to the frontend.
template:
{% extends "base.html" %}
{% block content %}
<div class="col-sm-8">
<select class="form-control" id="file">
<option value="">选择要监听的日志</option>
{% for k,v in logDict.items %}
<option value="{{ k }}">{{ v }}</option>
{% endfor %}
</select>
</div>
<div class="col-sm-2">
<input class="btn btn-success btn-block" type="button" onclick="connect()" value="开始监听"/><br/>
</div>
<div class="col-sm-2">
<input class="btn btn-warning btn-block" type="button" onclick="goclose()" value="终止监听"/><br/>
</div>
<div class="col-sm-12">
<textarea class="form-control" id="chat-log" disabled rows="20"></textarea>
</div>
{% endblock %}
The frontend takes TAILF and fills the select box by looping over it; because the data is a dict, looping with logDict.items gives us both the keys and the values.
With that, the log page itself is done, but it cannot actually follow a log yet, so let's continue.
The main design idea of the log-following feature: the page establishes a long-lived WebSocket connection with the backend, and the backend runs a while loop asynchronously in Celery that keeps reading the log file and sends new content to the WebSocket's channel, so the page displays it in real time.
Next, let's integrate Channels.
webapp/routing.py
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path, re_path
from chat.consumers import ChatConsumer
from tailf.consumers import TailfConsumer
application = ProtocolTypeRouter({
'websocket': AuthMiddlewareStack(
URLRouter([
path('ws/chat/', ChatConsumer),
re_path(r'^ws/tailf/(?P<id>\d+)/$', TailfConsumer),
])
)
})
The routing information is written directly inside URLRouter; note the extra list wrapping the routes, unlike the approach of pointing to a routing file path introduced in the previous article.
The page needs to tell the backend which log file to follow, so we use the routing regex (?P<id>\d+) to pass the file ID to the backend, which then resolves the log path from the TAILF dict in settings.
Routing is written exactly like URLs in Django; re_path is used to match regex routes.
In tailf/consumers.py:
import json
from channels.generic.websocket import WebsocketConsumer
from tailf.tasks import tailf

class TailfConsumer(WebsocketConsumer):
    def connect(self):
        self.file_id = self.scope["url_route"]["kwargs"]["id"]
        self.result = tailf.delay(self.file_id, self.channel_name)
        print('connect:', self.channel_name, self.result.id)
        self.accept()

    def disconnect(self, close_code):
        # Revoke the task that is still running
        self.result.revoke(terminate=True)
        print('disconnect:', self.file_id, self.channel_name)

    def send_message(self, event):
        self.send(text_data=json.dumps({
            "message": event["message"]
        }))
Here we use Channels in single-channel mode: every new connection gets its own channel, the connections don't affect one another, and any individual log-following request can be terminated at will.
connect
As we know, self.scope is similar to Django's request and carries rich information about the request; self.scope["url_route"]["kwargs"]["id"] retrieves the log ID matched by the routing regex.
The id and channel_name are then passed to the Celery task function tailf, which resolves the log file path from the id, loops over the file, and writes new content to the channel identified by channel_name.
disconnect
When the WebSocket connection is closed, we need to terminate the running Celery task to free the resources Celery is using.
Terminating a Celery task uses the revoke command, implemented with the following code:
self.result.revoke(terminate=True)
Note that self.result is a result object, not an id.
The terminate=True parameter controls whether the task is terminated immediately: when True, the task is killed immediately regardless of whether it is running; when False (the default), it is only terminated after the task finishes. Since we use a while loop, the task would never end unless we set it to True.
Another way to terminate a Celery task is:
from webapp.celery import app
app.control.revoke(result.id, terminate=True)
send_message
This makes it convenient to send messages to the channel from a Django view or from a Celery task, which is also the approach recommended by the official documentation.
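As a rough illustration (not part of the original project), a plain Django view could push a message to a specific consumer's channel the same way; the channel name here is assumed to have been stored somewhere by the consumer (session, cache, database, ...):
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.http import HttpResponse

def notify(request):
    # Hypothetical view: "channel" must be a real channel_name saved earlier.
    channel_name = request.GET["channel"]
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.send)(
        channel_name,
        {
            "type": "send.message",  # maps to TailfConsumer.send_message
            "message": "hello from a Django view"
        }
    )
    return HttpResponse("sent")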
Channels is now integrated and WebSocket works, but the Celery task tailf called in the connect method has not been implemented yet, so let's implement it.
For details on Celery, see the article "Configuring Celery in Django for asynchronous and scheduled tasks"; this article does not cover integration or internals, only the task itself.
The task implementation is as follows:
from __future__ import absolute_import
from celery import shared_task
import time
from channels.layers import get_channel_layer
from asgiref.sync import async_to_sync
from django.conf import settings

@shared_task
def tailf(id, channel_name):
    channel_layer = get_channel_layer()
    filename = settings.TAILF[int(id)]
    try:
        with open(filename) as f:
            f.seek(0, 2)
            while True:
                line = f.readline()
                if line:
                    print(channel_name, line)
                    async_to_sync(channel_layer.send)(
                        channel_name,
                        {
                            "type": "send.message",
                            "message": "微信公众号【运维咖啡吧】原创 版权所有 " + str(line)
                        }
                    )
                else:
                    time.sleep(0.5)
    except Exception as e:
        print(e)
This involves another very important point in Channels: sending messages to a channel from outside Channels.
In fact, the method used in the previous article to check that the channel layer works is itself an example of sending a message to a channel from the outside. The concrete code in this article is:
async_to_sync(channel_layer.send)(
channel_name,
{
"type": "send.message",
"message": "微信公众号【运维咖啡吧】原创 版权所有 " + str(line)
}
)
channel_name corresponds to the channel_name passed to this task; the message is sent to the channel with that name.
type corresponds to the send_message method of our Channels TailfConsumer class; just replace the _ in the method name with a .
message is the actual content to send to this channel.
The above sends to a single channel; to send to a Group, use the following code instead:
async_to_sync(channel_layer.group_send)(
group_name,
{
'type': 'chat.message',
'message': '欢迎关注公众号【运维咖啡吧】'
}
)
Just change send (single channel) to group_send and channel_name to group_name.
Pay special attention: when calling the channel layer from synchronous code, you must wrap the call with async_to_sync.
The backend is now complete; finally, we need to add WebSocket support to the frontend page.
function connect() {
if ( $('#file').val() ) {
window.chatSocket = new WebSocket(
'ws://' + window.location.host + '/ws/tailf/' + $('#file').val() + '/');
chatSocket.onmessage = function(e) {
var data = JSON.parse(e.data);
var message = data['message'];
document.querySelector('#chat-log').value += (message);
// Scroll the log area to the bottom
$('#chat-log').scrollTop($('#chat-log')[0].scrollHeight);
};
chatSocket.onerror = function(e) {
toastr.error('服务端连接异常!')
};
chatSocket.onclose = function(e) {
toastr.error('websocket已关闭!')
};
} else {
toastr.warning('请选择要监听的日志文件')
}
}
The previous article described the WebSocket message types in detail, so they are not repeated here.
At this point our log page is complete with full follow functionality, but it still cannot be stopped; read on.
The main logic of the "stop watching" button on the page is to trigger the WebSocket's close, which fires the disconnect method of the Channels consumer on the backend and in turn terminates the Celery task that loops over the log file.
The frontend can close the WebSocket directly via .close(); of course, closing the page also triggers the WebSocket onclose event, so there is no need to worry about the Celery task never ending.
function goclose() {
console.log(window.chatSocket);
window.chatSocket.close();
window.chatSocket.onclose = function(e) {
toastr.success('已终止日志监听!')
};
}
With that, our tailf log watching and stopping page, with all its features, is complete.
Now that both articles are done, I hope you have a deeper understanding of Channels and can put it to work in your own projects to build the features you want. Personally, I think the key and the hard part of Channels is understanding and using the channel layer; once you truly understand it and can use it fluently, I'm sure you can extend these ideas to many more requirements. Finally, if you are interested in the demo source code for this article, follow the WeChat official account 【运维咖啡吧】 and reply 小二 in the backend to add me on WeChat and ask for it.
|
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_pickerrenamed toaction_picker
Wrench Quickly...renamed toAction Quickly...with new shortcutCmd-Shift-A
ide.run_actionadded (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import * # UIKeyModifier*
import blackmamba.ide as ide
bm.register_default_key_commands()
def launch_stash():
ide.run_action('StaSh') # <- editor action custom title, case sensitive
# or ide.run_script('launch_stash.py')
register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
launch_stash, 'Launch StaSh...')
ide.run_actionaccepts editor action custom title and it's case sensitive. Another option is toignoreeditor actions and use justide.run_scriptwith script name.zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
|
Proxy Servers for Noobs
Intro
After my recent post, Discord Reverse Proxy, there have been a lot of people, like @Baconman321 and @DynamicSquid and @Jeydin21 who have asked how proxies work, and what they do, so I decided to follow up that post with this one, a tutorial on how to make a proxy server and a breakdown of how it works!
The Breakdown
The official definition of a proxy server is a server application or appliance that acts as an intermediary for requests from clients seeking resources from servers that provide those resources. Take a look at this image:
In the visual, it's essential to note that Alice is not asking Bob what time it is; the proxy is asking on Alice's behalf. Because of this, Alice's privacy is protected, as Bob doesn't know that the proxy passed its response on to Alice. That is the core concept of a proxy server. Once you understand that, there are all kinds of crazy ideas you can build with them!
Example Proxy
In our example, I'll be using Python and Flask because most people know Python, and Flask is really lightweight. Let's make it step by step.
Creating a Server
Let's get started with setting up Flask, pretty simple:
import flask
app = flask.Flask(__name__)
@app.route('/')
def proxy():
return "I'm Alive!"
app.run(host="0.0.0.0", port=8080)
A few lines of code and we have a web server up and running! Now let's set up the intermediate between the client and the website.
Adding the Intermediate
We're gonna use requests to handle the retrieval of resources in our proxy. Let's add a few lines to the code we have so far:
import flask
import requests
app = flask.Flask(__name__)
target = "https://google.com/"
Now that we know what site we want to proxy let's add that functionality:
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>', methods=["GET", "POST"])
def proxy(path: str):
r = requests.get(f"{target}{path}")
return r.content
And just like that, we have our proxy up and running! It will serve as a middle man between you and google.com. The best part about a proxy is that you can do anything you want with the r variable before you return its content! This makes proxies extremely viable for tracking and filtering activities.
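To make that concrete, here is a small hedged sketch (not part of the original tutorial) of a proxy route that logs every request and blocks responses containing a placeholder keyword before returning the content; BLOCKED_WORD and the log format are purely illustrative:
import flask
import requests

app = flask.Flask(__name__)
target = "https://google.com/"
BLOCKED_WORD = b"casino"  # illustrative keyword, pick anything you want to filter

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def proxy(path: str):
    # tracking: log who asked for what before forwarding the request
    print(f"client {flask.request.remote_addr} requested /{path}")
    r = requests.get(f"{target}{path}")
    # filtering: refuse to relay responses containing the keyword
    if BLOCKED_WORD in r.content:
        return "Blocked by proxy", 403
    return r.content

app.run(host="0.0.0.0", port=8080)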
Fleshing out the Proxy
Right now, our proxy has quite a few limitations. It can only handle GET requests, meaning that users can't submit forms and the google.com server can't use other methods. Let's add that functionality by checking for each method and processing it appropriately.
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>', methods=["GET", "POST"])
def proxy(path: str):
if flask.request.method == "GET":
r = requests.get(f"{target}{path}")
return r.content
elif flask.request.method == "POST":
r = requests.post(f"{target}{path}", json=flask.request.get_json())
return r.content
The GET method is the same as what we had before, but now that we're handling POST requests, we have to take the extra step of getting any JSON data that might exist in the request, which is why json=flask.request.get_json() was added to the method.
Wrapping It Up
Once we've finished all that our final product should look like this:
import flask
import requests
app = flask.Flask(__name__)
target = "https://google.com/"
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>', methods=["GET", "POST"])
def proxy(path: str):
if flask.request.method == "GET":
r = requests.get(f"{target}{path}")
return r.content
elif flask.request.method == "POST":
r = requests.post(f"{target}{path}", json=flask.request.get_json())
return r.content
app.run(host="0.0.0.0", port=8080)
Conclusion
Thanks for reading, and I hope you learned something new today! (=^.^=)
|
Topic: Need a hint!
Please tell me how this can be done. Is there perhaps an example somewhere?
"Modify the program that draws geometric figures as follows: add a main menu in which the item 'Settings' contains the sub-items 'Image settings' and 'Text settings'. Choosing 'Image settings' opens a dialog window for selecting the colour and size of each of the three figures. Clicking the 'Text settings' sub-item similarly opens a dialog window for configuring the size and colour of the text for each image."
from tkinter import *

def triangle():
canvas.coords(r, (0, 0, 0, 0))
canvas.coords(c, (0, 0, 0, 0))
canvas.itemconfig(t, fill='yellow', outline='white')
canvas.coords(t, (50, 200, 340, 200, 110, 60))
text.delete(1.0, END)
text.insert(1.0, 'Зображення трикутника')
text.tag_add('title', '1.0', '1.end')
text.tag_config('title', font=('Times', 14), foreground='blue')
def rectangle():
canvas.coords(t, (0, 0, 0, 0, 0, 0))
canvas.coords(c, (0, 0, 0, 0))
canvas.itemconfig(r, fill='blue', outline='white')
canvas.coords(r, (80, 50, 320, 200))
text.delete(1.0, END)
text.insert(1.0, 'Зображення прямокутника')
text.tag_add('title', '1.0', '1.end')
text.tag_config('title', font=('Times', 14), foreground='black')
def oval():
canvas.coords(r, (0, 0, 0, 0))
canvas.coords(t, (0, 0, 0, 0, 0, 0))
canvas.itemconfig(c, fill = 'red', outline = 'black')
canvas.coords(c, (300, 40, 100, 240))
text.delete(1.0, END)
text.insert(1.0, 'Зображення кола')
text.tag_add('title', '1.0', '1.end')
text.tag_config('title', font=('Times', 14), foreground='black')
def cleaning():
canvas.coords(r, (0, 0, 0, 0))
canvas.coords(t, (0, 0, 0, 0, 0, 0))
canvas.coords(c, (0, 0, 0, 0))
text.delete(1.0, END)
text.insert(1.0, 'Очищення полотна')
text.tag_add('title', '1.0', '1.end')
text.tag_config('title', font=('Times', 14), foreground='black')
win = Tk()
b_triangle = Button(text = "Трикутник", width = 15, command = triangle)
b_rectangle = Button(text = "Прямокутник", width = 15, command = rectangle)
b_oval = Button(text = "Коло", width=15, command = oval)
b_cleaning = Button(text = "Очищення полотна", width=15, command = cleaning)
canvas = Canvas(width=400, height=300, bg='#fff')
text = Text(width=55, height=5, bg='#fff', wrap=WORD)
t = canvas.create_polygon(0, 0, 0, 0, 0, 0)
r = canvas.create_rectangle(0, 0, 0, 0)
c = canvas.create_oval(0, 0, 0, 0)
b_triangle.grid(row=0, column=0)
b_rectangle.grid(row=1, column=0)
b_oval.grid(row=2, column=0)
b_cleaning.grid(row=3, column=0)
canvas.grid(row=0, column=1, rowspan=10)
text.grid(row=11, column=1, rowspan=3)
win.mainloop()
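One possible direction for the menu part of the task, as a minimal hedged sketch (the dialog titles and the function names image_settings / text_settings below are assumptions, not a complete solution): a Menu widget with a 'Settings' cascade whose items open simple colour and size dialogs, whose results you would then apply via canvas.itemconfig / canvas.coords and text.tag_config in the program above.
from tkinter import Tk, Menu, colorchooser, simpledialog

def image_settings():
    # ask for a fill colour and a size for the figures; apply them with
    # canvas.itemconfig(...) and canvas.coords(...) in the real program
    color = colorchooser.askcolor(title="Figure colour")[1]
    size = simpledialog.askinteger("Figure size", "Size in pixels:", minvalue=10)
    print(color, size)

def text_settings():
    # ask for a text colour and font size; apply them with
    # text.tag_config('title', font=('Times', size), foreground=color)
    color = colorchooser.askcolor(title="Text colour")[1]
    size = simpledialog.askinteger("Text size", "Font size:", minvalue=6)
    print(color, size)

win = Tk()
menubar = Menu(win)
settings = Menu(menubar, tearoff=0)
settings.add_command(label="Image settings", command=image_settings)
settings.add_command(label="Text settings", command=text_settings)
menubar.add_cascade(label="Settings", menu=settings)
win.config(menu=menubar)
win.mainloop()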
|
from datetime import time

service_duration = time(hour=1)  # how long the service takes
intervals = 30  # minutes; any type that is convenient for solving the task can be used here

# working hours
times_list = [
    {'start': time(10, 00), 'end': time(16, 00)},
    {'start': time(18, 00), 'end': time(20, 00)}
]

# existing bookings
reservations = [
    # time: when the booking starts
    # end: when the booking ends (time + duration)
    # duration: how long the booking takes
    {'time': time(12, 00), 'end': time(13, 00), 'duration': time(1, 00)},
    {'time': time(14, 00), 'end': time(15, 00), 'duration': time(1, 00)}
]
We need to get a list of time slots at which the one-hour service can be booked, taking into account the working hours and the time that is already taken by the existing bookings. The result should be split into steps of intervals minutes for booking.
It should come out as follows:
result = [
time(10, 00),
time(10, 30),
time(11, 00),
time(13, 00),
time(15, 00),
time(18, 00),
time(18, 30),
time(19, 00)
]
result: [datetime.time(10, 0), datetime.time(10, 30), datetime.time(11, 0), datetime.time(13, 0), datetime.time(15, 0), datetime.time(18, 0), datetime.time(18, 30), datetime.time(19, 0)]
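One way to approach it, as a minimal sketch built on the data structures above (the helper names to_dt and free_slots are mine, not from the original post): walk each working-hours window in intervals-minute steps and keep a start time whenever a full service_duration fits without overlapping an existing booking.
from datetime import datetime, date, timedelta

def to_dt(t):
    # helper: anchor a time on an arbitrary day so we can do arithmetic
    return datetime.combine(date.today(), t)

def free_slots(times_list, reservations, service_duration, intervals):
    step = timedelta(minutes=intervals)
    duration = timedelta(hours=service_duration.hour, minutes=service_duration.minute)
    busy = [(to_dt(r['time']), to_dt(r['end'])) for r in reservations]
    result = []
    for window in times_list:
        start, end = to_dt(window['start']), to_dt(window['end'])
        current = start
        while current + duration <= end:
            # keep the slot only if it does not overlap any existing booking
            if all(current + duration <= b_start or current >= b_end
                   for b_start, b_end in busy):
                result.append(current.time())
            current += step
    return result

print(free_slots(times_list, reservations, service_duration, intervals))
# [datetime.time(10, 0), datetime.time(10, 30), datetime.time(11, 0), datetime.time(13, 0),
#  datetime.time(15, 0), datetime.time(18, 0), datetime.time(18, 30), datetime.time(19, 0)]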
|
This portrait is that of a gentleman named Edmond Belamy and it went under the hammer at Christie's, the famous auction house, for a jaw-dropping $432,500. However, the signature on the piece, spotted by eagle-eyed readers (in the bottom right corner), would appear very strange.
The signature is this equation:
$$\min \limits_{G} \max \limits_{D} \mathbb{E}_{x} [\log (D(x))] + \mathbb{E}_{z} [\log (1- D(G(z)))]$$
The artwork was made by a generative adversarial neural network, which is just a fancy term for a neural network which tries to get better by fighting itself. The network has two parts. One which makes an educated guess, and the other which tries to discriminate, and differentiate the unwanted images from the good ones. This information flows back to the guesser, which now attempts to generate better guesses. The two adversarial components thus improve the entire model. The above image was from a series of art pieces on the fictional Belamy family, by Obvious Art.
The tutorial requires some knowledge of TensorFlow (ideally you should have built CNNs before trying this out).
Implementing Neural Style Transfer
In this article however, we focus on a different type of generation, with neural style transfer techniques. This was first covered in the paper by Gatys et al., A Neural Algorithm of Artistic Style. The process is quite different from GANs. We make use of a traditional convolutional network to transfer features from an image, and then rebuild them with stylistic content fed from another image.
If you have messed around with convolutional neural networks, you would know that they work by learning features from the images in the dataset, and then attempt to extract those features from a new image shown to it. Similarly, we can extract these features from the middle of the network, and try to reconstruct the original image with them. No doubt, the result would be slightly different. Now what if we modify these features, and instead add a stylistic component to them? The reconstructed image would be quite different, and would show traces of the style modification to its features. This is the key idea behind neural style transfer.
We run into a tiny problem. To extract features from a network, we should have already trained it. For this purpose, instead of designing a model ourselves, and then training it from scratch, we will use the VGG model from TensorFlow. To follow along with the code used for this article in one place, you can go here.
Visual Geometry Group Model
The VGG model is a set of deep convolutional networks which was the first runner up in the ILSVRC-2014, losing by a narrow margin to GoogLeNet. The VGG model takes in an input $224 \times 224 \times 3$ image and applies a series of convolutions and poolings to it, in order to extract features. The model can have varying depths. For the purpose of this style transfer, the VGG19 model was used but feel free to experiment with the 16 layers deep version as well!
We can visualize what kind of features are extracted from an image, using this picture of the BITS Pilani Rotunda taken from reddit as an example. Below the picture, is a set of selected images from its feature map, and in it, we can see how the neurons in the network fire. It consists of outlines, light spots, dark spots and so on, and I do encourage you to try this out with a variety of images. As we go into the deeper layers of the model, we find the patterns getting more obscure, as they simply start representing just the presence or absence of a feature. In our style transfer, we will simply take these features from the content image to rebuild our generated image.
To build this yourself, you'll need the following code. First we import all the libraries and initialize an instance of the VGG model. We then dump out all the layers which we won't require and consider only 5 layers for our needs. We then redefine our model with only those layers.
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Model
import matplotlib.pyplot as plt
from numpy import expand_dims
model = VGG19()
ixs = [2, 5, 10, 15, 20]
outputs = [model.layers[i+1].output for i in ixs]
model = Model(inputs=model.inputs, outputs=outputs)
Then we run some standard Keras and numpy code to convert our images into a form the model can understand. The image_path will be a variable which you will have to add and set to the path of the target image. The VGG model works best with images of the shape (None, 224, 224, 3) so we set that as the size. We make it an array, reshape it, and process it for the model.
image = load_img(image_path, target_size=(224, 224))
image = img_to_array(image)
image = expand_dims(image, axis=0)
image = preprocess_input(image)
We now put the model to work and get it to predict the image. Our modified model means that the predict method returns the feature maps of the image we fed in, and not the actual prediction.
feature_maps = model.predict(image)
Finally, we iterate through the maps and use matplotlib to show them to us.
square = 8
for fmap in feature_maps:
fig = plt.figure(figsize=(32, 19))
ix = 0
for _ in range(square):
for _ in range(square):
ax = fig.add_subplot(square, square, ix+1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(fmap[0, :, :, ix], cmap='gray')
ix += 1
fig.show()
Starting with Style Transfer
Now, let's get around to the task of the actual style transfer. For this, we need two images - a content image and a style image. The content image would be broken down into its features and reconstructed from it. As usual, we start off by importing the necessary libraries.
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg19 import preprocess_input
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import time
import PIL.Image
We now write some code to load our images in. Again, content_path and style_path are two variables which you will have to define to the path of the image.
def load_image(image):
dim = 224
image = plt.imread(image)
img = tf.image.convert_image_dtype(image, tf.float32)
img = tf.image.resize(img, [dim, dim])
img = img[tf.newaxis, :]
return img
content = load_image(content_path)
style = load_image(style_path)
Let's also build a method to go the other way round, a method to convert the tensor to an image.
def tensor_to_image(tensor):
tensor = tensor * 255
tensor = np.array(tensor, dtype=np.uint8)
if np.ndim(tensor) > 3:
assert tensor.shape[0] == 1
tensor = tensor[0]
return PIL.Image.fromarray(tensor)
We import the VGG model, and as before, make some changes to it for the purpose of style transfer.
vgg_model = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg_model.trainable = False
We also initialize the layers we want to consider for the transfer. To remove a layer, simply add a # just before the layer to comment it, thus ignoring it from the code. Generally, you would want to block out most of the content layers, but you can experiment with this yourself.
content_layers = ['block1_conv2',
'block2_conv2',
'block3_conv2',
'block4_conv2',
'block5_conv2'
]
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1'
]
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
We will now write a function to make the custom model, flexible to the selections of the above block.
def custom_vgg_model(layer_names, model):
outputs = [model.get_layer(name).output for name in layer_names]
model = Model([vgg_model.input], outputs)
return model
We make use of a "gram matrix" to delocalise the features from the style image. To make this, we take a convoluted version of the style image. Let's assume it is of size $m \times n \times f$, with $f$ feature filters. This is then reshaped into an $f \times mn$ matrix, which is then multiplied with its transpose to yield a final matrix of size $f \times f$, which will have the features we need.
def gram_matrix(tensor):
    # squeeze the batch dimension: (1, h, w, f) -> (h, w, f)
    temp = tf.squeeze(tensor)
    # flatten to an (f, h*w) matrix, as described above
    fun = tf.reshape(temp, [temp.shape[2], temp.shape[0] * temp.shape[1]])
    # multiply with its own transpose to get the (f, f) gram matrix
    result = tf.matmul(fun, fun, transpose_b=True)
    gram = tf.expand_dims(result, axis=0)
    return gram
Finally, since we want to make a model, we put all of this into a class. We write a standard constructor for the class, and a member function to re-scale the images, which will enable us to pull out the features from them.
class Style_Model(tf.keras.models.Model):
def __init__(self, style_layers, content_layers):
super(Style_Model, self).__init__()
self.vgg = custom_vgg_model(style_layers + content_layers, vgg_model)
self.style_layers = style_layers
self.content_layers = content_layers
self.num_style_layers = len(style_layers)
self.vgg.trainable = False
def call(self, inputs):
inputs = inputs*255.0
preprocessed_input = preprocess_input(inputs)
outputs = self.vgg(preprocessed_input)
style_outputs, content_outputs = (outputs[:self.num_style_layers],
outputs[self.num_style_layers:])
style_outputs = [gram_matrix(style_output)
for style_output in style_outputs]
content_dict = {content_name:value
for content_name, value
in zip(self.content_layers, content_outputs)}
style_dict = {style_name:value
for style_name, value
in zip(self.style_layers, style_outputs)}
return {'content':content_dict, 'style':style_dict}
We also add an instance of the class, and proceed to initialize the extraction functions.
extractor = Style_Model(style_layers, content_layers)
style_targets = extractor(style)['style']
content_targets = extractor(content)['content']
Building the Loss Function
If the image generated is very different from the content image, that's definitely not desirable. We describe this difference as the content loss. Let's assume $T$ is the target image generated and $C$ is the original content image.
$$ L_{content} = \frac{1}{2} \sum (T-C)^2 $$
The content loss is relatively simple. We usually stick to only one layer, as the inclusion of too many layers confuses the network. However, it is different with the style image where we only want to capture the stylistic details, but not image formations. We take a blend of different layers to capture the stylistic strokes. The gram matrix we made helps in ensuring that these strokes don't get localised to its position in the image. For this, we can take a layer weighted average for the loss, with the weights being $w_i$, and the style information from the layer being $S_i$.
$$ L_{style} = \sum w_i (T_i - S_i)^2 $$
However on running the model, we find another problem. There are a lot of style artifacts which get repeated across the image. These high frequency stylings can be reduced by introducing another regularization variable called the variation loss. If we run passes through the styled image to find the edges using a Sobel filter, we will find those artifacts. Adding this to the loss will also reduce them by smoothing them out.
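As a quick hedged illustration (not from the original article), both effects are easy to inspect on the content tensor loaded earlier: tf.image.sobel_edges exposes the high-frequency detail, and tf.image.total_variation collapses it into the single number we will later penalise.
# assumes `content` from load_image above, shape (1, 224, 224, 3), float32 in [0, 1]
edges = tf.image.sobel_edges(content)        # shape (1, 224, 224, 3, 2): dy/dx per channel
variation = tf.image.total_variation(content)
print(variation.numpy()[0])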
We now define some weights for the various losses to control their influence. You can add another dict for weighting the content layer, but that might not exactly work well with the model. Again, you are free to try out such an implementation. You do not have to comment out bits here, as the weights will simply be ignored if the layer doesn't exist. The final equation for the total loss would look like this.
$$ L_{total} = a L_{content} + b L_{style} + c L_{variational} $$
style_weight = 2e-5
content_weight = 5e5
total_variation_weight = 10
style_weights = {'block1_conv1': 1,
'block2_conv1': 2,
'block3_conv1': 7,
'block4_conv1': 1,
'block5_conv1': 4}
The final custom loss function for this would be the code implementation of the math above.
def total_loss(outputs, image):
style_outputs = outputs['style']
content_outputs = outputs['content']
style_loss = tf.add_n([style_weights[name] * tf.reduce_mean((style_outputs[name]-style_targets[name])**2)
for name in style_outputs.keys()])
style_loss *= style_weight / num_style_layers
content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2)
for name in content_outputs.keys()])
content_loss *= content_weight / num_content_layers
variation_loss = total_variation_weight * tf.image.total_variation(image)
loss = style_loss + content_loss + variation_loss
return loss
The Style Transfer!
After getting the loss function sorted, we simply have to reduce the loss, as with all neural networks. We use the trusty Adam optimizer for the task. You can tune Adam's hyper parameters yourself, for an image. The abnormally high value of the epsilon seemed to work well for this one.
opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
Let's use this penciled image of a house as our style image and the same picture of the Rotunda as the content image.
To train the model, we build another method for it. To prevent the function from being agonizingly slow, we also decorate it. We clip the outputs of the model to force the values to remain between 0 and 1, in order to prevent the image from getting washed out or darkened. GradientTape is the wonderful overseer provided by TensorFlow. It perches above the running of the model and records all of its mathematical activities to enable automatic differentiation—which is vital for calculating new parameters.
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = total_loss(outputs, image)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0))
Finally, we let the training happen, and look at the result. The variables epochs and steps_per_epoch can be modified according to the needs of the user.
target_image = tf.Variable(content)
epochs = 4
steps_per_epoch = 50
step = 0
outputs = []
for n in range(epochs):
Tic = time.time()
for m in range(steps_per_epoch):
train_step(target_image)
step += 1
Toc = time.time()
print("Epoch " + str(n+1) + " took " + str(Toc-Tic) + " sec(s)")
outputs.append(tensor_to_image(target_image))
Shoot. The generated image is absolutely terrible. To improve it, we will need to tweak the hyper parameters for the model. Let's take a look at the hyper parameters we have - the various loss weights, and the choice of the layers to consider. Changing the style layers might do something creative, with the first layers targeting low level arbitrary features, while the last layers focus on intense, major features of the style image. Increasing the weighting of the style loss or the content loss respectively increases its influence on the image. Too much tweaking would however confuse the network, resulting in components of the style image getting imposed on the content image.
Finally, after delicate adjustment to find that narrow range where all goes well, we settle on the third and fifth convolutional layers as the heaviest weighted ones. We also give the content around $10^{10}$ times more weight than the style, logarithmically symmetrical around 1. At the end we get a sort of color pencil-ized version of the original image.
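For reference, a weight configuration in that spirit might look like the block below; these are illustrative values consistent with the description (heaviest emphasis on the third and fifth style layers, content weighted roughly $10^{10}$ times more than style), not the exact numbers used for the final image.
# illustrative only - tune these for your own image pair
style_weight = 1e-5
content_weight = 1e5
total_variation_weight = 10
style_weights = {'block1_conv1': 1,
                 'block2_conv1': 1,
                 'block3_conv1': 8,
                 'block4_conv1': 1,
                 'block5_conv1': 8}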
Finally, in case you get too tired with the hyper parameter search, fear not, for the good folks at TensorFlow maintain a module called tensorflow_hub. This module has many reusable models, one of which we used for style transfer.
import tensorflow_hub as hub
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')
stylized_image = hub_module(tf.constant(content), tf.constant(style))[0]
img = tensor_to_image(stylized_image)
plt.imshow(img)
The transition of the image along the style transfer—whose code eagle-eyed observers would have spotted in the training—is available with the full code in one place here.
Links
The original paper for neural style transfer by Gatys et al. : https://arxiv.org/pdf/1508.06576.pdf
A paper by the Tencent AI Lab for real time style transfer from video input : Real Time Neural Style Transfer for Videos
A paper by Li et al with the more mathematical aspects to help choosing the right hyper parameters for your model : https://arxiv.org/pdf/1701.01036.pdf
A neural style transfer tutorial by TensorFlow : https://www.tensorflow.org/tutorials/generative/style_transfer
The documentation of the tensorflow_hub implementation of fast style transfer : https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2
|
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
all files from the pythonista-site-packages-3 repository to your local site-packages-3 folder. Just download them or use git via StaSH.
Then you can hit ...
Cmd / - to toggle comments
Cmd N - to open new tab and to show new file dialog
Cmd Shift N - to open just new tab
Cmd 0 (zero) - to toggle library view (I call it navigator)
Cmd W - to close current tab
Cmd Shift W - to close all tabs except current one
Cmd O (letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNING It works, but it's experimental, dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lose data, ... I warned you :) If you modify this code and Pythonista crashes during startup, just open pythonista3:// in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editor module functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clue yet what I should pass as editor type, when force reload should be used, ... Also some directories are excluded, so you know why some files may be missing in the open quickly dialog. Also the open quickly dialog has a hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on the smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
@zrzka, thanks. This is looking pretty cool. Is there a strategy so I can clone the repo into the site-packages-3 dir? I cloned it then moved it, I was a little unsure of what I was doing. Maybe this can't be done because of site-packages-3 being a special folder. Not sure. But maybe I should just clone the blackmamba dir and deal with the startup separately. I just want to be able to update as you update by doing a git pull, rather than moving etc...
I have one suggestion, is to do print statements in your startup file. Just a few to say what's going on. I do that for my simple startup for changing the screen brightness, font size, turning off animations etc...Anyway, just a suggestion. I inserted a print line just saying 'doing some blackmamba stuff'.
You can clone it directly into the
site-packages-3 folder in this way ...
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
.git (352.0B) 2017-08-12 10:15:40
LICENSE (1.2K) 2017-08-12 10:15:40
Readme.md (2.2K) 2017-08-12 10:15:40
blackmamba (384.0B) 2017-08-12 10:15:40
external_screen.py (5.6K) 2017-08-12 10:15:40
pythonista_startup.py (1.3K) 2017-08-12 10:15:40
[site-packages-3]$ rm -rf .git LICENSE Readme.md blackmamba external_screen.py pythonista_startup.py
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
.git (352.0B) 2017-08-12 10:16:16
LICENSE (1.2K) 2017-08-12 10:16:15
Readme.md (2.2K) 2017-08-12 10:16:15
blackmamba (384.0B) 2017-08-12 10:16:15
external_screen.py (5.6K) 2017-08-12 10:16:15
pythonista_startup.py (1.3K) 2017-08-12 10:16:16
[site-packages-3]$
And then you can update it via
git pull...
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ git pull
[site-packages-3]$
Added some comments as requested and I also created
blackmamba.startup file where all the startup things are done. Then I removed pythonista_startup.py from the GitHub repo, so it will not interfere with your pythonista_startup.py file, and did add an Installation section to the readme. Just copy & paste the lines from this section to your pythonista_startup.py file. This will be a stable interface for BM initialization. Also the pythonista_startup.py file is ignored by git (.gitignore), feel free to modify it, add your custom routines, git pull will be happy.
Yeah, I know it's an awkward installation process, but because I still consider it a
hack, not going to provide a better way to install / update it. And I hope that some of these features will be added to Pythonista, thus lots of things will be removed, new ones added, ...
@zrzka, sorry. I can't see what I am doing wrong. I have done this a few times so all your cmds are not there. But I think you can see from the printout below, the site-packages-3 dir is empty before I do the clone. But you can see I only end up with a single folder in the site-packages-3 dir after the clone.
I cant see what I am doing wrong
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
pythonista-site-packages-3 (256.0B) 2017-08-12 15:39:32
[site-packages-3]$
I probably had an older StaSH. Did update StaSH and you're right, now
git clone behaves correctly. So the clone command is:
git clone https://github.com/zrzka/pythonista-site-packages-3.git . (<- space dot at the end)
Phuket2
@zrzka , perfect, works great. Thanks for your help. You might want to make a note in your installation instructions about the trailing period. I can see you changed it, but this would trip up newbies like me.
Done :) More updates later, going to take week off :)
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
cool,perfect,thanks.
and wish @omz next pythonista update can support it native.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item. Eg. Maybe the user could pass the name of the wrench item name as a param in the bm.start('StaSh'). I guess it could also be a .py filename. Although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated, because I think it would get infinitely more complicated to let users to map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .py and run it, basically emulation of open and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work, which can be replaced with something simpler. I can remove HW shortcuts registration from
bm.start, and provide something like bm.register_default_key_commands. And if you don't call this function in your pythonista_startup.py file, feel free to map your own shortcuts via bm.register_key_command. Or call it and add your own after bm.start().
Shortcut for the wrench item
Just do this in your
pythonista_startup.py file:
#!python3
import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui
def launch_wrench_item(name):
print('Wrench item: {}'.format(name))
def launch_stash():
launch_wrench_item('StaSh')
bm.start()
bkc.register_key_command(
bkc.PYTHONISTA_SCOPE_EDITOR,
'S',
bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
launch_stash,
'Launch StaSh')
This maps
Cmd-Shift-S to launch_stash, where you can do whatever you want :)
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.py moved to blackmamba/experimental
blackmamba/startup.py trashed
register_default_key_commands introduced in blackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3 -> blackmamba)
If you don't want to check all these changes, just update your
pythonista_startup.py content:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help to get it set up correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing I just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oppps, sorry, I missed the post above this...Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just a silly example of how to print something with a keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am starting to take the time to do things properly. Just a matter of time before it becomes second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
But it works only if the scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_picker renamed to action_picker
Wrench Quickly... renamed to Action Quickly... with new shortcut Cmd-Shift-A
ide.run_action added (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import * # UIKeyModifier*
import blackmamba.ide as ide
bm.register_default_key_commands()
def launch_stash():
ide.run_action('StaSh') # <- editor action custom title, case sensitive
# or ide.run_script('launch_stash.py')
register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
launch_stash, 'Launch StaSh...')
ide.run_action accepts the editor action custom title and it's case sensitive. Another option is to ignore editor actions and just use ide.run_script with a script name.
zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
Hmm, StaSh & pip & GitHub doesn't support update. Hmm.
|
IFrame Visualizations
Cloudera Data Science Workbench versions 1.4.2 (and higher) added a new feature that allowed users to enable HTTP security headers for responses to Cloudera Data Science Workbench.
Most visualizations require more than basic HTML. Embedding HTML directly in your console also risks conflicts between different parts of your code. The most flexible way to embed a web resource is using an IFrame:
R
library("cdsw")
iframe(src="https://www.youtube.com/embed/8pHzROP1D-w", width="854px", height="510px")
Python
from IPython.display import HTML
HTML('<iframe width="854" height="510" src="https://www.youtube.com/embed/8pHzROP1D-w"></iframe>')
You can generate HTML files within your console and display them in IFrames using the /cdn folder. The cdn folder persists and serves static assets generated by your engine runs. For instance, you can embed a full HTML file with IFrames.
R
library("cdsw")
f <- file("/cdn/index.html")
html.content <- paste("<p>Here is a normal random variate:", rnorm(1), "</p>")
writeLines(c(html.content), f)
close(f)
iframe("index.html")
Python
from IPython.display import HTML
import random
html_content = "<p>Here is a normal random variate: %f </p>" % random.normalvariate(0,1)
file("/cdn/index.html", "w").write(html_content)
HTML("<iframe src=index.html>")
Cloudera Data Science Workbench uses this feature to support many rich plotting libraries such as htmlwidgets, Bokeh, and Plotly.
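As a hedged sketch of how that combination might look in Python (assuming the same /cdn mechanism described above and that Bokeh is installed; the file name is arbitrary), you could save a standalone Bokeh plot under /cdn and embed it in an IFrame:
from bokeh.plotting import figure
from bokeh.io import output_file, save
from IPython.display import HTML

p = figure(title="Example line plot")
p.line([1, 2, 3, 4], [4, 6, 5, 7])
output_file("/cdn/bokeh_plot.html")  # written to the persistent cdn folder
save(p)
HTML('<iframe src="bokeh_plot.html" width="700" height="400"></iframe>')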
|
Black Mamba - open quickly and other shortcuts
Okay, managed to create PyPI package. So, it's installable via:
cd ~/Documents
pip install blackmamba -d site-packages-3
But there's an issue with XML RPC and PyPI, see issue #264. So far, the workaround is to change line number 899 in the
site-packages/stash/bin/pip.py file from ...
hits = self.pypi.package_releases(pkg_name, True) # True to show all versions
... to ...
hits = self.pypi.package_releases(pkg_name, False)
This fixes the pip issue. Or at least, it looks like it does.
|
Sep 18th, 2020 - written by Kimserey.
A few months ago we looked into Marshmallow, a Python serialisation and validation framework which can be used to translate Flask request data to SQLAlchemy models and vice versa. In today's post we will look at how we can serialise an array containing polymorphic data.
Thanks to the flexibility of Python, it is common practice to hold objects of different classes in the same array. Those objects could be instances of classes derived from a common base; for example, if we had a base Notification class, the array could contain NotificationX and NotificationY objects.
class Notification:
def __init__(self, id: int):
self.id = id
class NotifcationUserCreated(Notification):
def __init__(self, id: int, username: str):
super().__init__(id)
self.username = username
def __repr__(self):
return "<NotifcationUserCreated id={}, username={}".format(
self.id, self.username
)
class NotifcationQuantityUpdated(Notification):
def __init__(self, id: int, quantity: int):
super().__init__(id)
self.quantity = quantity
def __repr__(self):
return "<NotifcationQuantityUpdated id={}, quantity={}".format(
self.id, self.quantity
)
We can then have an array composed of any notification:
from faker import Faker
fake = Faker()
notifications = [
NotifcationUserCreated(1, fake.name()),
NotifcationQuantityUpdated(2, 10),
NotifcationUserCreated(3, fake.name()),
]
will result in the following notifications:
[<NotifcationUserCreated id=1, username=Lauren Lambert,
<NotifcationQuantityUpdated id=2, quantity=10,
<NotifcationUserCreated id=3, username=David Woods]
As we can see, we are able to mix multiple classes into the array.
In order to serialize the notifications, we can create their Marshmallow schemas:
from marshmallow import Schema
from marshmallow.fields import Int, Str
class NotifcationUserCreatedSchema(Schema):
id = Int()
username = Str()
class NotifcationQuantityUpdatedSchema(Schema):
id = Int()
quantity = Int()
But as we can see, if we try to dump using one schema, we would lose the information for notifications of the other type:
schema = NotifcationUserCreatedSchema(many=True)
schema.dump(notifications)
would result in:
[{'username': 'Lori Jackson', 'id': 1},
{'id': 2},
{'username': 'Samantha Clark', 'id': 3}]
Since the array contains polymorphic data, we need a way to use NotifcationUserCreatedSchema when the object is a user created notification, and use the other schema when the object is of the other type.
In order to handle the selection of the right schema, we’ll use a type_map attribute which will map from the notification type to the schema.
We first start by creating notification_type attributes on the classes:
class NotifcationUserCreated(Notification):
notification_type = "user_created"
def __init__(self, id: int, username: str):
super().__init__(id)
self.username = username
def __repr__(self):
return "<NotifcationUserCreated id={}, username={}".format(
self.id, self.username
)
class NotifcationQuantityUpdated(Notification):
notification_type = "quantity_updated"
def __init__(self, id: int, quantity: int):
super().__init__(id)
self.quantity = quantity
def __repr__(self):
return "<NotifcationQuantityUpdated id={}, quantity={}".format(
self.id, self.quantity
)
Then we create a NotificationSchema which holds a type_map mapping from the notification_type to the class schemas NotifcationUserCreatedSchema and NotifcationQuantityUpdatedSchema.
import typing

from marshmallow import ValidationError

class NotificationSchema(Schema):
"""Notification schema."""
type_map = {
"user_created": NotifcationUserCreatedSchema,
"quantity_updated": NotifcationQuantityUpdatedSchema,
}
def dump(self, obj: typing.Any, *, many: bool = None):
result = []
errors = {}
many = self.many if many is None else bool(many)
if not many:
return self._dump(obj)
for idx, value in enumerate(obj):
try:
res = self._dump(value)
result.append(res)
except ValidationError as error:
errors[idx] = error.normalized_messages()
result.append(error.valid_data)
if errors:
raise ValidationError(errors, data=obj, valid_data=result)
return result
def _dump(self, obj: typing.Any):
notification_type = getattr(obj, "notification_type")
inner_schema = NotificationSchema.type_map.get(notification_type)
if inner_schema is None:
raise ValidationError(f"Missing schema for '{notification_type}'")
return inner_schema().dump(obj)
The NotificationSchema acts as the parent schema which selects the proper schema to dump the object. We override the original dump function def dump(self, obj: typing.Any, *, many: bool = None) and, within it, use the type map to instantiate the right schema and call dump on that schema.
A special scenario to handle is when many=True is given: the object is expected to be an array, which we enumerate while consolidating the validation errors - these should only be missing-schema errors (since dump does not run validation; only load does).
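As a quick illustration of that error path (a minimal sketch; the NotificationUnknown class below is hypothetical and only exists to trigger the missing-schema case):
class NotificationUnknown(Notification):
    # hypothetical type with no entry in NotificationSchema.type_map
    notification_type = "unknown"

schema = NotificationSchema(many=True)
try:
    schema.dump([NotifcationUserCreated(1, "alice"), NotificationUnknown(2)])
except ValidationError as error:
    # the messages are keyed by the index of the offending item,
    # while successfully dumped items remain available in error.valid_data
    print(error.messages)
    print(error.valid_data)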
And using this schema we can now serialize the notifications:
schema = NotificationSchema(many=True)
schema.dump(notifications)
which will result in:
[{'username': 'Anthony Montgomery', 'id': 1},
{'id': 2, 'quantity': 10},
{'username': 'Mr. Andrew Carter', 'id': 3}]
And we can see that we are able to serialize each notification properly!
Today we looked into serializing a polymorphic array. We started by creating a polymorphic structure example with Notifications, then saw how to create the associated Marshmallow schemas, and finally looked at how to override the Marshmallow schema dump to serialize the array properly. I hope you liked this post and see you in the next one!
|
When thinking of topics that are hard or impossible to illustrate without lab equipment, electronics and similar subjects may come to mind. That even a quickly written algorithm can cause problems when there is no compiler and the like at hand, however, is something you often only realise on closer reflection.
The foundation of all Jupyter-related projects is a kernel called IPython, which is essentially a variant of the interactive REPL (Read-Eval-Print Loop) view extended with convenience functions. The idea behind this service, usually encountered only as a component, was to give developers an easily addressable interface to the REPL through which novel interaction scenarios can be realised with little effort.
In the meantime, this interface has been extended to non-Python programming languages, as shown in Figure 1.
What is the Notebook?
Long Python programs do not look particularly inviting, especially to career changers, but also to developers who are not very familiar with Python. A classic antipattern would be, for example, the hello-world application of the YDLIDAR laser scanners.
Jupyter Notebooks are probably the best-known use case of IPython. In essence, they are served by a process that allows the logic to be accessed from any web browser (Fig. 2).
Jupyter Notebooks are not limited to displaying a web console reminiscent of Azure. Authors of a Jupyter Notebook, which is stored on disk with the extension .ipynb, may also add explanatory notes. The reward is a web page reminiscent of an interactive textbook, in which developers and learners can interact directly with the working and teaching material.
Preparation on Windows 10
Because of the now very wide adoption of Jupyter, a large number of installation options are available. In the following steps we want to work with the native environment – the easiest way to set it up is to download the Anaconda distribution, which provides not only a Python interpreter but also convenience features such as package management. First, visit the Anaconda site [2] and scroll down to the Anaconda Installers section. The author opted for the Python 3.7 version in the following steps; since a 64-bit operating system is used, you need to download the 466 MB 64-bit version of the program.
During installation, the setup program asks whether Anaconda should be added to the Path variable: we will decline this in the following steps, because the product can also be launched from the Start menu.
After the successful installation of the Anaconda version for Python 3.7, you will find the entry Jupyter Notebooks in the Start menu, which provides direct access to the working environment. After launching it, the terminal window shown in Figure 3 appears on your workstation's screen, prompting you to open a specific URL.
If you enter the URL – on the author's machine it was http://localhost:8888/?token=701007f8ab81716cede6c1989fc338b6cc1f6c05ac8fe3fd – into a browser of your choice, you first find yourself in a kind of file manager. The author chose Google Chrome for the following steps. Other browsers should generally work as well; in user groups, however, there are recurring reports of massive problems with the classic Internet Explorer. It should also be noted that running both server and client locally, as done here, is only one way to reach the goal. You may run client and server on different machines, or use the web services available at [3].
When the server is launched directly from the Start menu, the Files tab is by default set to the contents of the C: drive. You can click individual folders or files to interact with them.
In the Running tab you will then find a list of all Jupyter-related tasks currently being executed by your workstation. If you have just freshly installed the product, this tab is of course still empty.
For first experiments, click the New button at the top of the list. Jupyter responds by showing a context menu in which you choose the option Notebook | Python 3. After the kernel has started successfully, the window looks as shown in Figure 4.
Cellular Structure
Jupyter Notebooks differ from classic source code files in that the individual elements shown in Figure 4 are called cells. First, click the menu to activate the option Insert Cell Below. Jupyter responds by showing an additional cell.
To demonstrate further possibilities we need to load the two cells with code. In the first cell we place the following small numeric statement:
3+3
Cell number 2 shall for now take a small for loop that prints information via the print command:
for x in range(0, 3): print("We're on time %d" % (x))
For execution we then choose the option Cell | Run Cells, which leads to the result shown in Figure 5 when the cell loaded with the for loop is selected.
Executing the cell with the numeric addition can be triggered by selecting it and then clicking the Run symbol or the option Run Cells. Alternatively, there is also the menu option Cell | Run All, which releases all cells in the Jupyter Notebook for execution at once.
An interesting experiment concerns creating a variable. To do this, we choose a cell in which we place the following code and then execute it:
i=0
Since assigning a value to a variable does not produce any console output, we only see the successful execution by an increment of the index shown in square brackets. In addition, the focus (normally) moves one cell further down.
In the next step we therefore choose a free cell and place the following incrementing and printing code in it:
i=i+2
i
In the next step you can select this cell repeatedly and execute it once each time by clicking Run. Repeating this a few times yields the result shown in Figure 6.
If you want to stop the kernel running in the background, click the menu option Kernel | Restart and Clear Output. If you then select the cell with the variable-incrementing code again and run it, the output window shows, instead of a numeric value, the error "NameError: name 'i' is not defined". This is logical and correct, because restarting the kernel has reset all state information held in it.
If you opened the newly created notebook in a separate tab, you can now return to the Jupyter start page and select the Running tab there. Our newly created notebook appears under the Notebooks section, where it can be shut down by clicking the Shut Down option.
Enriching Elements
Up to this point, the Jupyter Notebook is a Python command line living in the web browser: anyone who configures an editor such as Visual Studio Code comfortably gets a similar experience when merely executing source code. Jupyter Notebooks only play out their advantages once you use advanced features in addition to the code cells.
The by far most common candidate is the Markdown format. To demonstrate it, first create a new cell and then click the menu option Cell | Cell Type | Markdown. After setting the option, the [] callout disappears from the header – this indicates that the content of this cell is processed not by the IPython kernel but by an HTML converter.
To demonstrate the handling, we place the following syntax in the next step, which demonstrates bold and italic text as well as a third-level heading:
### Third-level heading
**bold text**
*italicized text*
Normal text
After deselecting, but also already while entering the syntax, you can see that Jupyter Notebooks offer syntax highlighting support, as shown in Figure 7.
If you then select the content of the cell and click Run, you will notice that the entire cell is replaced by the output shown in Figure 8. This can be undone by double-clicking the output – Jupyter Notebook then automatically switches back to edit mode.
Markdown cells thus make it possible to add documentation notes. In practice, however, this is only a small part of the solution.
Learning Visually
The author's many years of teaching experience show that the influence of parameters is most easily made visible to learners when they can play with the value in question. A good example would be the screen image of a DPO shown in Figure 9. The line there is the trigger level, which the wave has to "touch" in order to stabilise the display. The easiest way to make this procedure clear to learners is to let them move the line across the screen using the trigger-level rheostat. Once they see that the wave now stands still, an immediate positive learning effect sets in.
If playing with parameters requires manual adjustments to the source code and a recompilation, experience teaches that the direct feedback loop is broken and learning efficiency drops rapidly. A more efficient measure would be to embed controls, for example a slider, that allow the parameters to be influenced "directly". In the Jupyter world this is realised via the ipywidgets library, whose basic structure is shown in Figure 10.
At the kernel level there is a widget class that, in the data binding world of XAML and the like, would be called a model. In the frontend there is a manipulation class that on the one hand adjusts the values living in the model and on the other hand takes care of updating the controls shown in the browsers.
We will continue with a new notebook. Click File | New Notebook | Python 3 to create another notebook. Saving the already open workbook beforehand is, by the way, not necessary, since the Jupyter Notebook server automatically assigns a new name.
For first experiments we need two cells. The first one gets the following code:
import ipywidgets as widgets
tamswidget = widgets.FloatSlider()
Since the kernel living behind the notebook is a more or less ordinary Python runtime, the author is not exempt from using import declarations. After loading the relevant module, we call the constructor to create an instance of a slider parameterised with float values and store it in the variable tamswidget.
In the second cell we first reference the display method and then pass it the instance created above:
from IPython.display import display
display(tamswidget)
Looking at the architectural structure shown in the diagram above, a float slider should appear at this point that draws its information from the model.
To verify this, we click the menu option Run All and indeed see a slider in the second cell whose value by default ranges from 0 to 100.
To keep the example playful, we then create another cell further down that only receives the display code. Then run everything via the Run symbol in the toolbar to enjoy the appearance of another slider. Changing the value of one slider then causes the value of the other instance to be adjusted as well. Importing the class again is, by the way, not necessary, because our kernel – analogous to the variable experiments carried out above – is stateful with respect to imports too.
The data binding engine of Jupyter is also very flexible in other respects. The documentation, for example, contains the following element, which connects a text box and a slider via a data binding relationship created manually by the developer:
a = widgets.FloatText()
b = widgets.FloatSlider()
display(a, b)
mylink = widgets.jslink((a, 'value'), (b, 'value'))
Passing Data to Charts and Variables
The limits of the data binding can be shown with a cell whose content is print(tamswidget.value). When you run the cell for the first time, you get the current value of the model, just like with the variable output above. Later changes of the value, however, no longer influence the output of the print cell, because Jupyter is not able to recognise the dependency between the values.
Jupyter here runs into a problem that will be all too familiar to users of JavaScript frameworks. Almost all programming languages, in the interest of better system performance, do not equip their basic variables with a notification handler, which is why monitoring variable values requires manual work. In the case of Jupyter Notebook, we must first wrap the primitive print call in a function introduced by the def keyword:
def on_change(v):
    print(v['new'])
Enter the code into the cell and then click the Run symbol. Do not be surprised if the notebook presents no graphical output – the function has now become part of the state and, in the following cell, can be connected to the change event according to the following scheme:
tamswidget.observe(on_change, names='value')
Click Run in this cell as well to establish the relationship. From this moment on you can change the widget values at will, which leads to the behaviour shown in Figure 11. Note, however, that the list of values grows very quickly and soon fills the screen.
Plotting Functions
The Python ecosystem has an interesting unique selling point in Matplotlib – once you have worked your way into the possibilities of chart generation with Matplotlib, you will not want to miss it. It follows that you can also create Matplotlib charts in a Jupyter Notebook. As a first and simplest example, we take a new empty cell in which we place the following code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x));
In principle, this is a well-known combination of Matplotlib and NumPy. What is new in this context is above all the environment parameter that instructs Matplotlib to use the output facilities of the present runtime instead of its own chart window.
When you execute a cell equipped with Matplotlib, an empty cell immediately appears below it. The appearance of the chart, which for the given code looks like Figure 12, can meanwhile take up to 5 seconds.
plot accepts, analogous to full-blown Python, more or less arbitrary functions. A grateful candidate for a first visualisation would be a linear function moving away from the origin. For this we create a new cell whose plot call has the following body:
%matplotlib inline
import matplotlib.pyplot as plt
. . .
ax.plot(x, x+tamswidget.value)
The ideal solution would of course be for the chart to adapt dynamically. In theory, we could proceed according to the scheme discussed above and wrap the construction method in another helper function:
def toiler(v):
    plt.style.use('seaborn-whitegrid')
    import numpy as np
    fig = plt.figure()
    ax = plt.axes()
    x = np.linspace(0, 10, 1000)
    ax.plot(x, x+tamswidget.value)
An additional element then takes care of establishing a relationship between the new function and the widget:
display(tamswidget)
tamswidget.observe(toiler, names='value')
The repeated call to display only serves to make an instance of the slider available locally, i.e. in spatial proximity to the cell with the chart. Anyone who runs the two cells, however, is in for an unpleasant experience. Since the toiler method creates a completely new instance of the chart on every call, the cell soon "overflows", which the browser acknowledges by showing a scroll bar – not particularly conducive to clarity.
To fix this problem you could rely on Matplotlib functions or perform advanced manipulations on the chart objects. That, however, is a task we cannot cover here to the necessary extent.
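One possible direction – sketched here only as a rough idea, not as this article's solution – is to let the ipywidgets interact helper drive the redraw; it re-renders the cell output on every change instead of appending a new chart below the old one:
# minimal sketch, assuming the %matplotlib inline backend from above is active
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def draw(offset=0.0):
    # rebuilds the chart for the current slider value; interact replaces the previous output
    x = np.linspace(0, 10, 1000)
    plt.plot(x, x + offset)
    plt.show()

interact(draw, offset=(0.0, 100.0, 1.0))
interact creates its own FloatSlider from the (0.0, 100.0, 1.0) tuple, so the tamswidget instance from above is not needed for this variant.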
A Look into the Future
Jupyter Notebooks may be an outright fascinating way to visualise scientific processes. In practice, however, the user interface is somewhat unwieldy. Anyone who grew up with Visual Studio and the like wishes for a more streamlined working surface. The Jupyter development team meets this wish with a product called Lab, which is (to a certain degree) the spiritual successor of the still fully supported Jupyter Notebooks.
The by far most important innovation is a more or less full-blown editor that eases working with the still supported files.
To use the new user interface we must first stop our Jupyter Notebook server. The most convenient way is simply to close the command line window. Then search the Start menu for the option Anaconda Prompt to load a command line preloaded with the Anaconda paths. JupyterLab itself is then launched by entering the following command:
(base) C:\Users\tamha>Jupyter lab
[I 22:44:54.857 LabApp] JupyterLab extension loaded from C:\Users\tamha\anaconda3\lib\site-packages\jupyterlab
Analogous to the "classic" Jupyter Notebook, the starter prints a URL here as well. Place it in a browser of your choice to open the start window shown in Figure 13.
Clicking one of the two files used above then loads it into a tab, which makes working considerably easier. Clicking the console window instead opens a REPL console, extended with the IPython features.
Is It Worth It?
A well-made notebook helps immensely when it comes to quickly making the relationships represented by algorithms graphically comprehensible. Anyone who tries to reprogram the implemented infrastructure by hand will be busy for a few months – engaging with the product pays off. It should also be noted that the extremely high popularity means the book market is now well stocked.
|
Last time I saw
that the Jianshu power user 彭小六 made a chart of his own Jianshu articles against his follower numbers, shown below.
I found it quite interesting, so I put together a crude crawler version of my own; the result is shown below. Anyone interested can try it too – the code is given further down. Since packaging a Python program as an exe doesn't work very well, I'm only providing the source code.
The x-axis is the date, the y-axis the count. The first row shows the likes and followers curves, the second row is a scatter plot of articles per day. Blue is the number of likes, red the number of followers, and the green dots are the number of articles. The image below is a single likes/followers chart, which is a bit easier to read. This is just to show the effect – please ignore my pitiful like and follower counts.
The code follows. For installing Scrapy see 新手向爬虫(三)别人的爬虫在干啥; you also need the matplotlib and seaborn (style beautification) libraries for plotting, installed with
pip install matplotlib and pip install seaborn.
The Scrapy spider is saved in
the num2.py file; you need to change the yourcookie variable on line 6 of the code to your own remember_user_token. To get it: after logging into Jianshu, press F12 to open the browser tools, click the Network tab, refresh with F5, click one of the newly appearing network request entries, and you will find the cookie in the request headers. We don't need the whole cookie (it is a list of key=value fields); we only need the remember_user_token=xx part.
Then run the spider on the command line with scrapy runspider num2.py -o 2.json; the data is saved in the 2.json file. Then create the post-processing file x.py and run python x.py to get the result.
The Scrapy spider
# -*- coding: utf-8 -*-
import scrapy

# Run
# scrapy runspider num2.py -o 2.json

yourcookie = 'xxxxxxxxx'  # the remember_user_token part of the cookie, obtained via the browser's F12 tools

class Num1Spider(scrapy.Spider):
    name = "num2"
    allowed_domains = ["jianshu.com"]
    info_url = 'http://www.jianshu.com/notifications?all=true&page=%d'
    article_url = 'http://www.jianshu.com/users/66f24f2c0f36/latest_articles?page=%d'
    Cookie = {
        'remember_user_token' : yourcookie
    }
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
    }
    num = 0  # page counter

    def start_requests(self):  # default entry point, provides the URLs to crawl
        while self.num < 30:  # I have fewer than 30 pages, quite few; the whole run took 1.991004 seconds
            self.num += 1
            yield scrapy.Request(self.info_url % self.num,
                                 headers = self.headers,
                                 cookies = self.Cookie,
                                 callback = self.parse)  # crawl the notifications page
            yield scrapy.Request(self.article_url % self.num,
                                 headers = self.headers,
                                 cookies = self.Cookie,
                                 callback = self.article_parse)  # crawl the latest articles page

    def parse(self, response):
        time = response.css('li .time::text').extract()
        token = response.css('li span + i').extract()
        for t, k in zip(time, token):
            if 'fa-heart' in k:
                yield {'time': t, 'token': 'heart'}
            elif 'fa-check' in k:
                yield {'time': t, 'token': 'check'}
            else:
                pass
        # unified output format: {'time': t, 'token': 'x'}

    def article_parse(self, response):
        # from scrapy.shell import inspect_response
        # inspect_response(response, self)
        for t in response.css('.time::attr("data-shared-at")').extract():
            if not t: break
            yield {'time': t.replace('T', ' '), 'token': 'article'}
Post-processing the data
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import seaborn as sns
# once seaborn is imported, matplotlib's default plotting style is overridden by seaborn's
import json
from collections import defaultdict
import datetime

like = defaultdict(int)  # int default dict: missing keys default to 0
focus = defaultdict(int)
article = defaultdict(int)

with open('2.json', 'r') as f:  # parse the json and count likes, follows and articles per day
    data = json.load(f)
    for i in data:
        time = i['time'].split(' ')[0]
        if i['token'] == 'heart':
            like[time] += 1
        elif i['token'] == 'check':
            focus[time] += 1
        elif i['token'] == 'article':
            article[time] += 1

# datetime.datetime.strptime(x, '%Y-%m-%d')
for i, c in zip([like, focus, article], ['b-', 'r--', 'go']):
    i = sorted(i.items(), key = lambda x : x[0])  # turn the dict into a list sorted by date
    x = [datetime.datetime.strptime(i[0], '%Y-%m-%d') for i in i]  # x axis: parse '2016-10-22' strings into datetime objects
    y = [i[1] for i in i]  # y axis
    plt.plot_date(x, y, c)

plt.savefig('2.jpg', dpi=300)  # save the figure at a decent resolution
plt.show()
|
This tutorial shows an example of loading a pandas DataFrame and reading the data into a tf.data.Dataset.
This tutorial uses a small dataset provided by the Cleveland Clinic Foundation for Heart Disease. The dataset (CSV) contains several hundred rows; each row describes a patient and each column an attribute.
Using this data we can predict whether or not a patient has heart disease, which makes this a binary classification problem.
Read data using pandas
import pandas as pd
import tensorflow as tf
Download the CSV containing the heart dataset.
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')
Downloading data from https://storage.googleapis.com/applied-dl/heart.csv
16384/13273 [=====================================] - 0s 0us/step
Read the CSV using pandas.
df = pd.read_csv(csv_file)
df.head()
df.dtypes
age           int64
sex           int64
cp            int64
trestbps      int64
chol          int64
fbs           int64
restecg       int64
thalach       int64
exang         int64
oldpeak     float64
slope         int64
ca            int64
thal         object
target        int64
dtype: object
Convert the thal column, the only object-typed column in the dataframe, to discrete numeric values.
df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes
df.head()
Load data using tf.data.Dataset
Use the tf.data.Dataset.from_tensor_slices method to read the values from the pandas dataframe.
target = df.pop('target')
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
for feat, targ in dataset.take(5):
    print('Features: {}, Target: {}'.format(feat, targ))
Features: [ 63. 1. 1. 145. 233. 1. 2. 150. 0. 2.3 3. 0. 2. ], Target: 0
Features: [ 67. 1. 4. 160. 286. 0. 2. 108. 1. 1.5 2. 3. 3. ], Target: 1
Features: [ 67. 1. 4. 120. 229. 0. 2. 129. 1. 2.6 2. 2. 4. ], Target: 0
Features: [ 37. 1. 3. 130. 250. 0. 0. 187. 0. 3.5 3. 0. 3. ], Target: 0
Features: [ 41. 0. 2. 130. 204. 0. 2. 172. 0. 1.4 1. 0. 3. ], Target: 0
Since pd.Series implements the __array__ protocol, it can be used almost anywhere you would use an np.array or a tf.Tensor.
tf.constant(df['thal'])
<tf.Tensor: shape=(303,), dtype=int8, numpy= array([2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 2, 3, 2, 4, 4, 3, 4, 3, 3, 3, 3, 3, 3, 4, 4, 3, 3, 3, 3, 4, 3, 4, 3, 4, 3, 3, 4, 2, 4, 3, 4, 3, 4, 4, 2, 3, 3, 4, 3, 3, 4, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 4, 4, 3, 3, 4, 4, 2, 3, 3, 4, 3, 4, 3, 3, 4, 4, 3, 3, 4, 4, 3, 3, 3, 3, 4, 4, 4, 3, 3, 4, 3, 4, 4, 3, 4, 3, 3, 3, 4, 3, 4, 4, 3, 3, 4, 4, 4, 4, 4, 3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 3, 3, 2, 4, 4, 2, 3, 3, 4, 4, 3, 4, 3, 3, 4, 2, 4, 4, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 3, 3, 3, 4, 3, 4, 3, 4, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 4, 3, 2, 4, 4, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 2, 2, 4, 3, 4, 2, 4, 3, 3, 4, 3, 3, 3, 3, 4, 3, 4, 3, 4, 2, 2, 4, 3, 4, 3, 2, 4, 3, 3, 2, 4, 4, 4, 4, 3, 0, 3, 3, 3, 3, 1, 4, 3, 3, 3, 4, 3, 4, 3, 3, 3, 4, 3, 3, 4, 4, 4, 4, 3, 3, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 3, 2, 4, 4, 4, 4], dtype=int8)>
Shuffle and batch the data.
train_dataset = dataset.shuffle(len(df)).batch(1)
Create and train a model
def get_compiled_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])

    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
Epoch 1/15
WARNING:tensorflow:Layer dense is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
303/303 [==============================] - 1s 2ms/step - loss: 9.6765 - accuracy: 0.6139
Epoch 2/15
303/303 [==============================] - 1s 2ms/step - loss: 0.6386 - accuracy: 0.7096
Epoch 3/15
303/303 [==============================] - 1s 2ms/step - loss: 0.6770 - accuracy: 0.7129
Epoch 4/15
303/303 [==============================] - 1s 2ms/step - loss: 0.5383 - accuracy: 0.7228
Epoch 5/15
303/303 [==============================] - 1s 2ms/step - loss: 0.5702 - accuracy: 0.7789
Epoch 6/15
303/303 [==============================] - 1s 2ms/step - loss: 0.5445 - accuracy: 0.7756
Epoch 7/15
303/303 [==============================] - 1s 2ms/step - loss: 0.5119 - accuracy: 0.7756
Epoch 8/15
303/303 [==============================] - 1s 2ms/step - loss: 0.6449 - accuracy: 0.7393
Epoch 9/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4893 - accuracy: 0.7756
Epoch 10/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4827 - accuracy: 0.7855
Epoch 11/15
303/303 [==============================] - 1s 2ms/step - loss: 0.5166 - accuracy: 0.7525
Epoch 12/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4551 - accuracy: 0.8053
Epoch 13/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4804 - accuracy: 0.8053
Epoch 14/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4786 - accuracy: 0.7888
Epoch 15/15
303/303 [==============================] - 1s 2ms/step - loss: 0.4304 - accuracy: 0.7987
<tensorflow.python.keras.callbacks.History at 0x7f83c0af59e8>
Alternative to feature columns
inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)
x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model_func = tf.keras.Model(inputs=inputs, outputs=output)
model_func.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
The easiest way to preserve the column structure of a pandas DataFrame when using tf.data is to convert the DataFrame to a dictionary and slice that dictionary.
dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)
for dict_slice in dict_slices.take(1):
    print(dict_slice)
({'age': <tf.Tensor: shape=(16,), dtype=int32, numpy= array([63, 67, 67, 37, 41, 56, 62, 57, 63, 53, 57, 56, 56, 44, 52, 57], dtype=int32)>, 'sex': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1], dtype=int32)>, 'cp': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 4, 4, 3, 2, 2, 4, 4, 4, 4, 4, 2, 3, 2, 3, 3], dtype=int32)>, 'trestbps': <tf.Tensor: shape=(16,), dtype=int32, numpy= array([145, 160, 120, 130, 130, 120, 140, 120, 130, 140, 140, 140, 130, 120, 172, 150], dtype=int32)>, 'chol': <tf.Tensor: shape=(16,), dtype=int32, numpy= array([233, 286, 229, 250, 204, 236, 268, 354, 254, 203, 192, 294, 256, 263, 199, 168], dtype=int32)>, 'fbs': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0], dtype=int32)>, 'restecg': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2, 0, 0, 0], dtype=int32)>, 'thalach': <tf.Tensor: shape=(16,), dtype=int32, numpy= array([150, 108, 129, 187, 172, 178, 160, 163, 147, 155, 148, 153, 142, 173, 162, 174], dtype=int32)>, 'exang': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0], dtype=int32)>, 'oldpeak': <tf.Tensor: shape=(16,), dtype=float32, numpy= array([2.3, 1.5, 2.6, 3.5, 1.4, 0.8, 3.6, 0.6, 1.4, 3.1, 0.4, 1.3, 0.6, 0. , 0.5, 1.6], dtype=float32)>, 'slope': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([3, 2, 2, 3, 1, 1, 3, 1, 2, 3, 2, 2, 2, 1, 1, 1], dtype=int32)>, 'ca': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([0, 3, 2, 0, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 0], dtype=int32)>, 'thal': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 2, 3, 2, 4, 4, 3], dtype=int32)>}, <tf.Tensor: shape=(16,), dtype=int64, numpy=array([0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0])>)
model_func.fit(dict_slices, epochs=15)
Epoch 1/15
19/19 [==============================] - 0s 3ms/step - loss: 33.5475 - accuracy: 0.2739
Epoch 2/15
19/19 [==============================] - 0s 3ms/step - loss: 14.4634 - accuracy: 0.3135
Epoch 3/15
19/19 [==============================] - 0s 3ms/step - loss: 2.5373 - accuracy: 0.6667
Epoch 4/15
19/19 [==============================] - 0s 3ms/step - loss: 1.7439 - accuracy: 0.7228
Epoch 5/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5952 - accuracy: 0.7360
Epoch 6/15
19/19 [==============================] - 0s 3ms/step - loss: 1.6009 - accuracy: 0.7261
Epoch 7/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5803 - accuracy: 0.7261
Epoch 8/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5652 - accuracy: 0.7261
Epoch 9/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5536 - accuracy: 0.7261
Epoch 10/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5390 - accuracy: 0.7261
Epoch 11/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5240 - accuracy: 0.7261
Epoch 12/15
19/19 [==============================] - 0s 3ms/step - loss: 1.5087 - accuracy: 0.7261
Epoch 13/15
19/19 [==============================] - 0s 3ms/step - loss: 1.4926 - accuracy: 0.7261
Epoch 14/15
19/19 [==============================] - 0s 3ms/step - loss: 1.4762 - accuracy: 0.7294
Epoch 15/15
19/19 [==============================] - 0s 3ms/step - loss: 1.4593 - accuracy: 0.7327
<tensorflow.python.keras.callbacks.History at 0x7f83c0af0cf8>
|
Method 1:
pip install netaddr
from netaddr import *
mac = EUI(0x05056bdafc)
print (mac)
mac.dialect = mac_unix
print (mac)
Method 2:
from pysnmp.smi import builder
mibBuilder = builder.MibBuilder()
MacAddress, = mibBuilder.importSymbols('SNMPv2-TC', 'MacAddress')
macAddress = MacAddress(hexValue='05056bdafc'.zfill(12))
macAddress.prettyPrint()
Method 3:
mac = '05056bdafc'.zfill(12)
':'.join([ mac[i:i+2] for i in range(0, 12, 2) ])
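A small follow-up sketch (not part of the original snippet; the format_mac helper name is mine) that wraps method 3 in a reusable function:
def format_mac(raw: str) -> str:
    # left-pad the hex string to 12 digits and join it into colon-separated octets
    mac = raw.zfill(12)
    return ':'.join(mac[i:i+2] for i in range(0, 12, 2))

print(format_mac('05056bdafc'))  # -> 00:05:05:6b:da:fc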
|
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
all files from the pythonista-site-packages-3 repository to your local site-packages-3 folder. Just download them or use git via StaSH.
Then you can hit ...
Cmd / - to toggle comments
Cmd N - to open new tab and to show new file dialog
Cmd Shift N - to open just new tab
Cmd 0 (zero) - to toggle library view (I call it navigator)
Cmd W - to close current tab
Cmd Shift W - to close all tabs except current one
Cmd O (letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNING It works, but it's experimental, dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lose data, ... I warned you :) If you modify this code and Pythonista crashes during startup, just open pythonista3:// in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editor module functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clue yet what I should pass as editor type, when force reload should be used, ... Also some directories are excluded, so you know why if some files are missing in the open quickly dialog. Also the open quickly dialog has a hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on the smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
Added some comments as requested and I also created
blackmamba.startup file where all the startup things are done. Then I removed pythonista_startup.py from the GitHub repo, so it will not interfere with your pythonista_startup.py file, and added an Installation section to the readme. Just copy & paste the lines from this section into your pythonista_startup.py file. This will be a stable interface for BM initialization. Also the pythonista_startup.py file is ignored by git (.gitignore), feel free to modify it, add your custom routines, git pull will be happy.
Yeah, I know it's an awkward installation process, but because I still consider it a
hack, I'm not going to provide a better way to install / update it. And I hope that some of these features will be added to Pythonista, thus a lot of things will be removed, new ones added, ...
@zrzka, sorry. I can't see what I am doing wrong. I have done this a few times but all your cmds are not there. I think you can see from the printout below that the site-packages-3 dir is empty before I do the clone. But you can see I only end up with a single folder in the site-packages-3 dir after the clone.
I can't see what I am doing wrong
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
pythonista-site-packages-3 (256.0B) 2017-08-12 15:39:32
[site-packages-3]$
I probably had an older StaSH. Did update StaSH and you're right, now
git clone behaves correctly. So the clone command is:
git clone https://github.com/zrzka/pythonista-site-packages-3.git . (<- space dot at the end)
Phuket2
@zrzka , perfect, works great. Thanks for your help. You might want to make a note in your installation instructions about the trailing period. I can see you changed it, but this would trip up newbies like me.
Done :) More updates later, going to take week off :)
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
cool,perfect,thanks.
and wish @omz next pythonista update can support it native.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item? Eg. maybe the user could pass the name of the wrench item as a param in bm.start('StaSh'). I guess it could also be a .py filename. Although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated, because I think it would get infinitely more complicated to let users to map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .py and run it, basically an emulation of open and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work, which can be replaced with something more simpler. I can remove HW shortcuts registration from
bm.start, can provide something like bm.register_default_key_commands. And if you don't call this function in your pythonista_startup.py file, feel free to map your own shortcuts via bm.register_key_command. Or call it and add your own after bm.start().
Shortcut for the wrench item
Just do this in your
pythonista_startup.py file:
#!python3
import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui

def launch_wrench_item(name):
    print('Wrench item: {}'.format(name))

def launch_stash():
    launch_wrench_item('StaSh')

bm.start()

bkc.register_key_command(
    bkc.PYTHONISTA_SCOPE_EDITOR,
    'S',
    bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
    launch_stash,
    'Launch StaSh')
This maps
Cmd-Shift-S to launch_stash, where you can do whatever you want :)
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.py moved to blackmamba/experimental
blackmamba/startup.py trashed
register_default_key_commands introduced in blackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3->blackmamba)
If you don't want to check all these changes, just update your
pythonista_startup.py content:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help to get it set up correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing I just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oppps, sorry, I missed the post above this...Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just silly example how to print smth with keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am starting to take the time to do things properly. Just a matter of time before it becomes second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
But it does work if and only if scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_picker renamed to action_picker
Wrench Quickly... renamed to Action Quickly... with new shortcut Cmd-Shift-A
ide.run_action added (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import *  # UIKeyModifier*
import blackmamba.ide as ide

bm.register_default_key_commands()

def launch_stash():
    ide.run_action('StaSh')  # <- editor action custom title, case sensitive
    # or ide.run_script('launch_stash.py')

register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
                     launch_stash, 'Launch StaSh...')
ide.run_action accepts the editor action custom title and it's case sensitive. Another option is to ignore editor actions and use just ide.run_script with the script name.
zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
Hmm, StaSh & pip & GitHub doesn't support update. Hmm.
Okay, managed to create PyPI package. So, it's installable via:
cd ~/Documents
pip install blackmamba -d site-packages-3
But there's an issue with XML RPC and PyPI, see issue #264. So far, the workaround is to change line number 899 in the
site-packages/stash/bin/pip.py file from ...
hits = self.pypi.package_releases(pkg_name, True) # True to show all versions
... to ...
hits = self.pypi.package_releases(pkg_name, False)
This fixes the pip issue. Or at least, it looks like it does.
I gave up smoking last night and changed to vaping instead. Maybe this was not a good week to do that :)
|
Crash on OpenBSD: tor invoked from Tor Browser 6.0.4
While testing an update to the (proposed) TBB port for OpenBSD both I and my partner in torbsd.crime were able to get the instance of tor started by TBB to dump core, but not reliably.
We're using tor 0.2.8.7 under OpenBSD-current (Sept 5 snapshot). I've built myself a package for amd64 from the OpenBSD port with debugging symbols, so I can see what's going on. Under -current you do:
$ cd /usr/ports/net/tor
$ env DEBUG="-ggdb -O0" INSTALL_STRIP= make repackage
and install the resulting /usr/ports/packages/amd64/all/tor-0.2.8.7.tgz package.
Other than that I made no changes to tor itself. The core dump happened both with the standard package (no debug syms) and my package with debug syms.
We die in nodelist.c:836 at the call to the SL_ADD_NEW_IPV6_AP() macro because node->rs appears to be an invalid pointer (node->ri is fine):
(gdb) where
#0 0x000013438bc334a2 in tor_addr_family (a=0x1345c7c3ff58) at address.h:155
#1 0x000013438bc3501c in tor_addr_is_null (addr=0x1345c7c3ff58)
at src/common/address.c:871
#2 0x000013438bc3526e in tor_addr_is_valid (addr=0x1345c7c3ff58,
for_listening=0) at src/common/address.c:932
#3 0x000013438bb1575f in node_get_all_orports (node=0x1345c21f6000)
at src/or/nodelist.c:836
#4 0x000013438bc29a20 in node_is_a_configured_bridge (node=0x1345c21f6000)
at src/or/entrynodes.c:1871
#5 0x000013438bc2b74a in any_bridge_supports_microdescriptors ()
at src/or/entrynodes.c:2486
#6 0x000013438bb0d2ef in we_use_microdescriptors_for_circuits (
options=0x134681d2f7a0) at src/or/microdesc.c:924
#7 0x000013438bb0d3e9 in usable_consensus_flavor () at src/or/microdesc.c:961
#8 0x000013438bb102e8 in networkstatus_consensus_is_bootstrapping (
now=1473280922) at src/or/networkstatus.c:1249
#9 0x000013438bc019b2 in find_dl_schedule (dls=0x13438c0185d0,
options=0x134681d2f7a0) at src/or/directory.c:3732
#10 0x000013438bc020d0 in download_status_reset (dls=0x13438c0185d0)
at src/or/directory.c:3950
#11 0x000013438bb114bc in networkstatus_set_current_consensus (
consensus=0x13468873f000 "network-status-version 3 microdesc\nvote-status consensus\nconsensus-method 20\nvalid-after 2016-09-07 20:00:00\nfresh-until 2016-09-07 21:00:00\nvalid-until 2016-09-07 23:00:00\nvoting-delay 300 300\nclient"..., flavor=0x1345e6fb8470 "microdesc", flags=0) at src/or/networkstatus.c:1679
#12 0x000013438bbfba02 in connection_dir_client_reached_eof (
conn=0x1346506c2500) at src/or/directory.c:2009
#13 0x000013438bbfda9a in connection_dir_reached_eof (conn=0x1346506c2500)
at src/or/directory.c:2471
#14 0x000013438bbd32e9 in connection_reached_eof (conn=0x1346506c2500)
at src/or/connection.c:4841
#15 0x000013438bbd058d in connection_handle_read_impl (conn=0x1346506c2500)
at src/or/connection.c:3526
#16 0x000013438bbd05d9 in connection_handle_read (conn=0x1346506c2500)
at src/or/connection.c:3541
#17 0x000013438bb031ec in conn_read_callback (fd=-1, event=2,
_conn=0x1346506c2500) at src/or/main.c:803
#18 0x0000134603284cbe in event_base_loop ()
from /usr/local/lib/libevent_core.so.1.1
#19 0x000013438bb06397 in run_main_loop_once () at src/or/main.c:2543
#20 0x000013438bb064da in run_main_loop_until_done () at src/or/main.c:2589
#21 0x000013438bb062b7 in do_main_loop () at src/or/main.c:2515
#22 0x000013438bb0a0e5 in tor_main (argc=16, argv=0x7f7ffffc01b8)
at src/or/main.c:3646
#23 0x000013438bb01f3f in main (argc=16, argv=0x7f7ffffc01b8)
at src/or/tor_main.c:30
(gdb) up
#1 0x000013438bc3501c in tor_addr_is_null (addr=0x1345c7c3ff58)
at src/common/address.c:871
871 switch (tor_addr_family(addr)) {
(gdb) up
#2 0x000013438bc3526e in tor_addr_is_valid (addr=0x1345c7c3ff58,
for_listening=0) at src/common/address.c:932
932 return !tor_addr_is_null(addr);
(gdb) up
#3 0x000013438bb1575f in node_get_all_orports (node=0x1345c21f6000)
at src/or/nodelist.c:836
836 SL_ADD_NEW_IPV6_AP(node->rs, ipv6_orport, sl, valid);
(gdb) print node->rs
$16 = (routerstatus_t *) 0x1345c7c3ff00
(gdb) print *node->rs
Cannot access memory at address 0x1345c7c3ff00
(gdb) print node->ri
$18 = (routerinfo_t *) 0x134596a7aa00
(gdb) print *node->ri
$19 = {cache_info = {signed_descriptor_body = 0x0, annotations_len = 73,
signed_descriptor_len = 2223,
signed_descriptor_digest = "§À[º`?ø/\023\005ò\223»Q\004\223j\204íÌ",
identity_digest = "\232h¸Z\0021\217N~\207ò\202\2009ûÕ×[\001B",
published_on = 1473266407,
extra_info_digest = "¡ce8ÃÆ]ü\204^mà *º\220\021\205¹ä",
extra_info_digest256 = "¥m\n\231\234\003\230ý\021|ã\035hÊ\025b2 0ÐÐk/\217à\233ò\235\005ÇÇî", signing_key_cert = 0x1346133eb100, ei_dl_status = {
next_attempt_at = 1473280814, n_download_failures = 0 '\0',
n_download_attempts = 0 '\0', schedule = DL_SCHED_GENERIC,
want_authority = DL_WANT_ANY_DIRSERVER,
increment_on = DL_SCHED_INCREMENT_FAILURE},
saved_location = SAVED_IN_CACHE, saved_offset = 0, routerlist_index = 0,
last_listed_as_valid_until = 0, do_not_cache = 0, is_extrainfo = 0,
extrainfo_is_bogus = 0, send_unencrypted = 0},
nickname = 0x13459bfe5820 "NYCBUG0", addr = 1114571284, or_port = 9001,
dir_port = 9030, ipv6_addr = {family = 0 '\0', addr = {dummy_ = 0,
in_addr = {s_addr = 0}, in6_addr = {__u6_addr = {
__u6_addr8 = '\0' <repeats 15 times>, __u6_addr16 = {0, 0, 0, 0, 0,
0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}}}, ipv6_orport = 0,
onion_pkey = 0x13465a3d8d20, identity_pkey = 0x134674ecf280,
onion_curve25519_pkey = 0x134643b73920, cert_expiration_time = 1473872400,
platform = 0x134643b739a0 "Tor 0.2.9.2-alpha on FreeBSD",
bandwidthrate = 10240000, bandwidthburst = 15360000,
bandwidthcapacity = 7341056, exit_policy = 0x134674ecfd40,
ipv6_exit_policy = 0x0, uptime = 3, declared_family = 0x134674ecffb0,
contact_info = 0x134643b79780 "Admin <mirror-admin AT nycbug DOT org>",
is_hibernating = 0, caches_extra_info = 0, allow_single_hop_exits = 0,
wants_to_be_hs_dir = 1, policy_is_reject_star = 1,
needs_retest_if_added = 0, supports_tunnelled_dir_requests = 1,
omit_from_vote = 0, purpose = 2 '\002'}
(gdb) print node
$20 = (const node_t *) 0x1345c21f6000
(gdb) print *node
$21 = {ht_ent = {hte_next = 0x0, hte_hash = 1201906925}, nodelist_idx = 0,
identity = "\232hZ\0021\217N~\207202\2009[\001B", md = 0x13463eac4500,
ri = 0x134596a7aa00, rs = 0x1345c7c3ff00, is_running = 1, is_valid = 1,
is_fast = 1, is_stable = 1, is_possible_guard = 1, is_exit = 0,
is_bad_exit = 0, is_hs_dir = 0, name_lookup_warned = 0, rejects_all = 0,
using_as_guard = 0, ipv6_preferred = 0, country = 5, last_reachable = 0,
last_reachable6 = 0}
I wish I had more details to offer; so far that's all I have.
attila@rotfl:~ 18:$ ls -l /etc/malloc.conf
lrwxr-xr-x 1 root wheel 5 Sep 7 16:55 /etc/malloc.conf -> CFGJU
I've restarted and am hoping to cause this to occur again. Will update this ticket if I learn anything else. Bug me on IRC if you want (I'm attila on #tor-dev).
|
Context: I have a file with ~44 million lines. Each one is an individual with a US address, so there is a "ZIP code" field. The file is a TXT, pipe-delimited.
Because of the size I can't (at least on my machine) use pandas for the analysis. So the main question I have is: how many records (lines) are there for each individual ZIP code? I took the following steps, but I wonder whether there is a faster and more Pythonic way to do it (it seems there should be, I just don't know it).
Step 1: Create a set of the ZIP values from the file:
output = set()
with open(filename) as f:
    for line in f:
        output.add(line.split('|')[8])  # 9th item in the split string is the "ZIP" value
zip_list = list(output)  # List is length of 45,292
Step 2: Create a list of zeros of the same length as the first list:
zero_zip = [0]*len(zip_list)
Step 3: Create a dictionary (all zeros) from these two lists:
zip_dict = dict(zip(zip_list, zero_zip))
Step 4: Finally, I went through the file again, this time updating the dict just created:
with open(filename) as f:
    next(f)  # skip first line, which contains headers
    for line in f:
        zip_dict[line.split('|')[8]] += 1
I got the final result, but I wonder whether there is a simpler way. Thanks, everyone.
2 answers
Creating the zip_dict can be replaced with a defaultdict. And if you can go through every line in the file, you don't need to do it twice; you can just keep a running count.
from collections import defaultdict

d = defaultdict(int)
with open(filename) as f:
    for line in f:
        parts = line.split('|')
        d[parts[8]] += 1
This is simple using the built-in Counter class.
from collections import Counter

with open(filename) as f:
    c = Counter(line.split('|')[8] for line in f)
print(c)
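As a small follow-up (not part of either answer), the same Counter also gives the most frequent ZIP codes directly:
# print the ten ZIP codes with the most records
for zip_code, count in c.most_common(10):
    print(zip_code, count)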
|
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
allfiles from pythonista-site-packages-3 repository to your localsite-packages-3folder. Just download them or usegitvia StaSH.
Then you can hit ...
Cmd /- to toggle comments
Cmd N- to open new tab and to show new file dialog
Cmd Shift N- to open just new tab
Cmd 0(zero) - to toggle library view (I call it navigator)
Cmd W- to close current tab
Cmd Shift W- to close all tabs except current one
Cmd O(letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNINGIt works, but it'sexperimental,dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lost data, ... I warned you :) If you will modify this code and Pythonista will crash during startup, just openpythonista3://in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editormodule functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clueyetwhat should I pass as editor type, when force reload should be used, ... Also some directories are excluded, so, you know why if some files are missing in open quickly dialog. Also the open quickly dialog has hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
@zrzka, sorry. I can't see what I am doing wrong. I have done this a few times and all your cmds are not there. But I think you can see from the printout below that the site-packages-3 dir is empty before I do the clone, yet I only end up with a single folder in the site-packages-3 dir after the clone.
[site-packages-3]$ pwd
~/Documents/site-packages-3
[site-packages-3]$ ls -la
[site-packages-3]$ git clone https://github.com/zrzka/pythonista-site-packages-3.git
[site-packages-3]$ ls -la
pythonista-site-packages-3 (256.0B) 2017-08-12 15:39:32
[site-packages-3]$
I probably had an older StaSH. Did update StaSH and you're right, now git clone behaves correctly. So the clone command is:
git clone https://github.com/zrzka/pythonista-site-packages-3.git . (<- space dot at the end)
Phuket2
@zrzka , perfect, works great. Thanks for your help. You might want to make a note in your installation instructions about the trailing period. I can see you changed it, but this would trip up newbies like me.
Done :) More updates later, going to take week off :)
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
Cool, perfect, thanks.
And I wish @omz's next Pythonista update can support it natively.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item? E.g. maybe the user could pass the name of the wrench item as a param in bm.start('StaSh'). I guess it could also be a .py filename, although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated because I think it would get infinitely more complicated to let users map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .py and run it, basically emulation of opening and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work which can be replaced with something simpler. I can remove HW shortcuts registration from bm.start and provide something like bm.register_default_key_commands. And if you don't call this function in your pythonista_startup.py file, feel free to map your own shortcuts via bm.register_key_command. Or call it and add your own after bm.start().
Shortcut for the wrench item
Just do this in your pythonista_startup.py file:
#!python3

import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui

def launch_wrench_item(name):
    print('Wrench item: {}'.format(name))

def launch_stash():
    launch_wrench_item('StaSh')

bm.start()
bkc.register_key_command(
    bkc.PYTHONISTA_SCOPE_EDITOR,
    'S',
    bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
    launch_stash,
    'Launch StaSh')
This maps Cmd-Shift-S to launch_stash, where you can do whatever you want :)
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.py moved to blackmamba/experimental
blackmamba/startup.py trashed
register_default_key_commands introduced in blackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3 -> blackmamba)
If you don't want to check all these changes, just update your pythonista_startup.py content:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help getting it set up correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing I just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oops, sorry, I missed the post above this... Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just a silly example of how to print something with a keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am starting to take the time to do things properly. Just a matter of time before it becomes second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
But it works only and only if scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_picker renamed to action_picker
Wrench Quickly... renamed to Action Quickly... with new shortcut Cmd-Shift-A
ide.run_action added (see example below)
slight Action Quickly... UI improvements
title is custom title, or just the script name without extension if no title is provided
subtitle is script path
... and here's an example of how to register a custom shortcut, for example to launch StaSh ...
#!python3

import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import *  # UIKeyModifier*
import blackmamba.ide as ide

bm.register_default_key_commands()

def launch_stash():
    ide.run_action('StaSh')  # <- editor action custom title, case sensitive
    # or ide.run_script('launch_stash.py')

register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
                     launch_stash, 'Launch StaSh...')
ide.run_action accepts an editor action's custom title and it's case sensitive. Another option is to ignore editor actions and use just ide.run_script with a script name.
zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
Hmm, StaSh & pip & GitHub doesn't support update. Hmm.
Okay, managed to create PyPI package. So, it's installable via:
cd ~/Documents
pip install blackmamba -d site-packages-3
But there's an issue with XML RPC and PyPI, see issue #264. So far, the workaround is to change line number 899 in the site-packages/stash/bin/pip.py file from ...
hits = self.pypi.package_releases(pkg_name, True) # True to show all versions
... to ...
hits = self.pypi.package_releases(pkg_name, False)
This fixes the pip issue. Or at least, it looks like it does.
I gave up smoking last night and changed to vaping instead. Maybe this was not a good week to do that :)
For those who are using git, feel free to pull:
Basically did add a more complex sample pythonista_startup.py (readme) and the ability to set which folders are ignored in Run/Open Quickly... dialogs. Now going to figure out how to publish a PyPI package on iPad, left the MBP at home for two days :)
|
Making your own programming language with Python
Making your own programming language with Python
Why make your own language?
When you write your own programming language, you control the entire programmer experience.
This allows you to shape exactly how each aspect of your language works and how a developer interacts with it.
This allows you to make a language with things you like from other languages and none of the stuff you don't.
In addition, learning about programming language internals can help you better understand the internals of programming languages you use every day, which can make you a better programmer.
How programming languages work
Every programming language is different in the way it runs, but many consist of a couple fundamental steps: lexing and parsing.
Introduction to Lexing
Lexing is short for LEXical analysis.
The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure.
This step interprets the syntax of your language and turns the text into special symbols inside the language called tokens.
For example, let's say you have some code you want to parse. To keep it simple I'll use python-like syntax, but it could be anything. It doesn't even have to be text.
# this is a comment
a = (1 + 1)
A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)
The lexer will take the raw code and interpret it into a list of tokens.
The lexer can also be used to ensure that two pieces of code that may look different, like 1 + 1 and 1+1, are still parsed the same way.
For the code above, it might generate tokens like this:
NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN
Tokens can be in many forms, but the main idea here is that they are a standard and easy to parse way of representing the code.
Introduction to Parsing
The parser is the next step in the running of your language.
Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them.
Parser rules recognize a sequence of tokens and do something about them.
Let's look at a simple example for a parser with the same tokens as above.
A simple parser could just say:
If I see the GREET token and then a NAME token, print Hello, and then the name.
A more complicated parser aiming to parse the code above might have these rules, which we will explore later:
Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to consider a full mathematical operation as an expression, and then if that fails convert a single variable or number to an expression. This ensures that as much code as possible will be matched as an expression. The "expression" concept allows us to catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, that means a variable is being assigned.
Using PLY to write your language
What is PLY?
Now that we know the basics of lexing and parsing, lets start writing some python code to do it.
PLY stands for Python Lex Yacc.
It is a library you can use to make your own programming language with python.
Lex is a well known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler", meaning it is a tool that generates compilers for new languages.
This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY.
For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]
Lexing with PLY lex
Lexer tokens
Lets start our example! Fire up a new python repl and follow along with the code samples.
To start off, we need to import PLY:
from ply import lex, yacc
Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:
tokens = [
'PLUS',
]
t_PLUS = r'\+'
A string that looks like r'' is special in python. The r prefix means "raw", which keeps backslashes in the string literally. For example, to define the string \+ in python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier.
But what does \+ mean?
Well in the lexer, tokens are mainly parsed using regexes.
A regex is like a special programming language specifically for matching patterns in text.
A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part.
I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet.
The regex \+ means "match a single character +".
We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.
We are also required to define a function that runs when the lexer encounters an error:
def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)
This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr so it will print out quotes around the character).
You can change this to be whatever you want in your language though.
Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)
Since this token is a function, we can define the regex in the docstring of the function instead.
The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer using the t.lexer attribute.
This function matches at least one newline character and then increases the line number by the number of newlines it sees. This allows the lexer to know what line number it's on at all times using the lexer.lineno variable.
Now we can use the line number in our error function:
def t_error(t):
    print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
    t.lexer.skip(1)
Let's test out the lexer!
This is just some temporary code, you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.
lexer = lex.lex()
lexer.input('+')
for token in lexer:
    print(token)
Play around with the value passed to lexer.input.
You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function.
If you add more lines to the input string, the line number in the error message should change.
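For reference, the loop above prints each token's PLY repr; for the single '+' input you should see something like the following (exact line and position numbers will vary):
LexToken(PLUS,'+',1,0)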
More complicated tokens
Let's delete the test token and add some more complicated tokens.
Replace your tokens list and the t_PLUS line with the following code:
reserved_tokens = {
    'greet': 'GREET'
}

# Note: 'NAME' must also be listed, since the t_ID rule below can produce it.
tokens = list(reserved_tokens.values()) + [
    'NAME',
    'SPACE'
]

t_SPACE = r'[ ]'

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'
    if t.value in reserved_tokens:
        t.type = reserved_tokens[t.value]
    else:
        t.type = 'NAME'
    return t
Let's explore the regex we have in the t_ID function.
This regex is more complicated than the simple ones we've used before.
First, we have [a-zA-Z_]. This is a character class in regex. It means, match any lowercase letter, uppercase letter, or underscore.
Next we have [a-zA-Z0-9_]. This is the same as above except numbers are also included.
Finally, we have *. This means "repeat the previous group or class zero to unlimited times".
Why do we structure the regex like this?
Having two separate classes makes sure that the first one must match for it to be a valid variable.
If we exclude numbers from the first class, it not only doesn't match just regular numbers, but makes sure you can't start a variable with a number.
You can still have numbers in the variable name, because they are matched by the second class of the regex.
In the code, we first have a dictionary of reserved names.
This is a mapping of patterns to the token type that they should be.
The only one we have says that greet should be mapped to the GREET token.
The code that sets up the tokens list takes all of the possible reserved token values, in this example just ['GREET'], and adds on ['NAME', 'SPACE'], giving us ['GREET', 'NAME', 'SPACE'] automatically!
But why do we have to do this? Couldn't we just use something like the following code?
# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. In order to avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute which is the pattern matched.
The code inside this function simply checks if this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token which is produced from the pattern. When it sees the name greet, it will see greet is in the reserved names dictionary and produce a token type of GREET because that is the corresponding value in the dictionary. Otherwise, it will produce a NAME token because this is a regular variable.
This allows you to add more reserved terms easily later; it's as simple as adding a value to the dictionary.
If needed, you could also make the keys of the reserved names dictionary regexes and then match each regex against t.value in the function (see the sketch below).
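That variant is not part of this tutorial, but a rough sketch of the idea (the pattern and helper name here are made up for illustration) could look like this:
import re

# Hypothetical: reserved "names" given as regex patterns instead of literal strings
reserved_patterns = {
    r'greet(ing)?': 'GREET',   # matches "greet" or "greeting"
}

def classify(value):
    # return the token type for an identifier value
    for pattern, token_type in reserved_patterns.items():
        if re.fullmatch(pattern, value):
            return token_type
    return 'NAME'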
If you want to change these rules for your language, feel free!
Parsing with PLY yacc
Fair warning: Yacc can sometimes be hard to use and debug, even if you know python well.
Keep in mind, you don't have to use both lex and yacc, if you want you can just use lex and then write your own code to parse the tokens.
With that said lets get started.
Yacc basics
Before we get started, delete the lexer testing code (everything from lexer.input onward).
When we run the parser, the lexer is automatically run.
Let's add our first parser rule!
def p_hello(t):
    'statement : GREET SPACE NAME'
    print(list(t))
    print(f"Hello, {t[3]}")
Let's break this down.
Again, we have information on the rule in the docstring.
This information is called a BNF Grammar. A statement in BNF Grammar consists of a grammar rule known as a non-terminal and terminals.
In the example above, statement is the non-terminal and GREET SPACE NAME are terminals.
The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule.
The right hand side can also have non-terminals in it, just be careful to avoid infinite loops.
Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example.
Before the above example can run, we still have to add some more code.
Just like for the lexer, the error handler is required:
def p_error(t):
    if t is None:  # lexer error, already handled
        return
    print(f"Syntax Error: {t.value!r}")
Now let's create and run the parser:
parser = yacc.yacc()
parser.parse('greet replit')
If you run this code you should see:
[None, 'greet', ' ', 'replit']
Hello, replit
The first line is the list version of the object passed to the parser function.
The first value is the statement that will be produced from the function, so it is None.
Next, we have the values of the tokens we specified in the rule.
This is where the t[3] part comes from. This is the third item in the array, which is the NAME token, so our parser prints out Hello, replit!
Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called parsetab.py from which it can load the parse tables if they haven't changed.
You can change this filename by passing a kwarg into the yacc initialization, like parser = yacc.yacc(tabmodule='fooparsetab')
More complicated parsing: Calculator
This example is different from our running example, so I will just show a full code example and explain it.
from ply import lex, yacc

tokens = (
    'NUMBER',
    'PLUS', 'MINUS', 'TIMES', 'DIVIDE',
    'LPAREN', 'RPAREN',
)

t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print(f"Integer value too large: {t.value}")
        t.value = 0
    return t

def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

def t_error(t):
    print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
    t.lexer.skip(1)

t_ignore = ' \t'

lexer = lex.lex()

# Parsing

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+': t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_error(t):
    if t is None:  # lexer error
        return
    print(f"Syntax Error: {t.value!r}")

parser = yacc.yacc()

if __name__ == "__main__":
    while True:
        inp = input("> ")
        print(parser.parse(inp))
First we start off with the tokens: numbers, mathematical operations, and parenthesis.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want.
Next we have a simple number token which matches 0-9 with \d+ and then converts its value from a string to an integer.
The next code we haven't used before is t_ignore.
This variable holds all the characters the lexer should ignore, here ' \t', which means spaces and tabs.
When the lexer sees these, it will just skip them. This allows users to add spaces without it affecting the lexer.
Now we have 3 parser directives.
The first is a large one, producing an expression from 4 possible input values, one for each math operation.
Each input has an expression on either side of the math operator.
Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given.
If you want to make this prettier, consider a dictionary using the python stdlib operator module.
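For example, one way to do that (a sketch, not part of the original code) is to look the operator up in a dict of functions:
import operator

# map each operator token's text to the corresponding function
BINOPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    t[0] = BINOPS[t[2]](t[1], t[3])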
Next, we define an expression with parentheses around it as being the same as the expression inside.
This substitutes the inner expression's value for the parenthesized group, so whatever is inside the parentheses is evaluated first.
With very little code we created a rule that deals with nested parentheses correctly.
Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.
For a challenge, try adding variables into this calculator!
You should be able to set variables by using syntax like varname = any_expression and you should be able to use variables in expressions.
If you're stuck, see one solution from the PLY docs; a rough sketch of the idea is also shown below.
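Here is a rough sketch along the lines of the PLY docs' calculator (it assumes you have added NAME and EQUALS tokens to the lexer; the names dict is just a plain dictionary used as a symbol table):
start = 'statement'  # make 'statement' the grammar's start symbol
names = {}           # symbol table for variables

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    print(t[1])

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except KeyError:
        print(f"Undefined name {t[1]!r}")
        t[0] = 0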
That's it!
Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just the comments.
Have fun!
|
Intro
This example will probably get used in my advanced Raspberry Pi photo frame treatment. I use image display software, fbi, which is designed to take key presses in order to take certain actions, such as advancing to the next picture, displaying information about the current picture, etc. qiv works similarly. But I am running an automated picture display, so no one is around to type into the keyboard. Hence the need for software emulation of the physical keyboard. I've always believed it must be possible, but never knew how until this week.
I am not a python programmer, but sometimes you gotta use whatever language makes the job easier, and I only know how to do this in python, python3 specifically.
This will probably only make sense for Raspberry Pi owners.
Setup
I believe this will work best if your Raspberry Pi has only a keyboard and not a mouse hooked up because in that case your keyboard ought to be mapped to /dev/input/event0. But it’s easy enough to change that. To see whether your keyboard is /dev/input/event0 or /dev/input/event1 or some other, just cat one of those files and start typing. You should see some junk when you’ve selected the right /dev/input file.
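If you'd rather not guess, the same evdev package used below can also list the input devices and their names; a small sketch (run it with sudo, like the main program):
#!/usr/bin/python3
# list input devices so you can spot which /dev/input/eventN is the keyboard
from evdev import InputDevice, list_devices

for path in list_devices():
    print(path, InputDevice(path).name)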
The program
I call it keyinject.py.
#!/usr/bin/python3
# inject a single key, acting like a software keyboard
# DrJ 12/20
import sys
from evdev import UInput, InputDevice, ecodes as e
from time import sleep

# set DEBUG = True to print out more information
DEBUG = False
sleepTime = 0.001  # units are secs

# dict of name mappings. key is how we like to enter it, value is what is after KEY_ in evdev ecodes
d = {
    '1': '1',
    '2': '2',
    '3': '3',
    '4': '4',
    '5': '5',
    '6': '6',
    '7': '7',
    '8': '8',
    '9': '9',
    '0': '0',
    'a': 'A',
    'b': 'B',
    'c': 'C',
    'd': 'D',
    'e': 'E',
    'f': 'F',
    'g': 'G',
    'h': 'H',
    'i': 'I',
    'j': 'J',
    'k': 'K',
    'l': 'L',
    'm': 'M',
    'n': 'N',
    'o': 'O',
    'p': 'P',
    'q': 'Q',
    'r': 'R',
    's': 'S',
    't': 'T',
    'u': 'U',
    'v': 'V',
    'w': 'W',
    'x': 'X',
    'y': 'Y',
    'z': 'Z',
    '.': 'DOT',
    ',': 'COMMA',
    '/': 'SLASH',
    'E': 'ENTER',
    'S': 'RIGHTSHIFT',
    'C': 'LEFTCTRL'
}

# https://python-evdev.readthedocs.io/en/latest/tutorial.html
inputchars = sys.argv
# get rid of program name
ltrs = inputchars[1]
keybd = InputDevice("/dev/input/event0")
for ltr in ltrs:
    ui = UInput.from_device(keybd, name="keyboard-device")
    if DEBUG: print(ltr)
    mappedkey = d[ltr]
    key = "KEY_" + mappedkey
    if DEBUG: print(key)
    if DEBUG: print(e.ecodes[key])
    ui.write(e.EV_KEY, e.ecodes[key], 1)  # KEY_ down
    ui.write(e.EV_KEY, e.ecodes[key], 0)  # KEY_ up
    ui.syn()
    sleep(sleepTime)
    ui.close()
And it gets called like this:
$ sudo ./keyinject.py my.injected.letters
or
$ sudo ./keyinject.py ./m2.plE
to run the m2.pl script in the current directory and have it behave as though it were launched from a console terminal. The “E” is the ENTER key.
Interesting observations
There really is no such thing as a separate “k” key and “K” key (lower-case versus upper-case). There is only a single key labelled “K” on a keyboard. It’s a physical layer versus logical layer type of thing. The k and K are characters.
In the above program I did some of the keys – the ones I will be needing, plus a few bonus ones. I do need the ENTER key, and I can’t think of a way to convey that to this program, so to send ENTER you would do
$ sudo ./keyinject.py ENTER
But I was able to have these characters represent themselves: . , / so that’s not bad.
Prerequisites
You will need python3 version > 3.5, I think. And the evdev package. I believe you get that with
$ sudo pip3 install evdev
And if you don't have pip3 you can install that with
$ sudo apt-get install python3-pip
Reading keyboard input
Of course the opposite of simulating key presses is reading what’s been typed from an actual keyboard. That’s possible too with this handy evdev package. The following program is not as polished as the writing program, but it gives you the feel for what to do. I call it evread.py.
#!/usr/bin/python3
# https://python-evdev.readthedocs.io/en/latest/tutorial.html
import asyncio
from evdev import InputDevice, categorize, ecodes

dev = InputDevice('/dev/input/event0')

# following line is optional - it takes away the keybd from fbi!
# there is also a dev.ungrab()
dev.grab()

async def helper(dev):
    async for ev in dev.async_read_loop():
        print(repr(ev))

loop = asyncio.get_event_loop()
loop.run_until_complete(helper(dev))
Note the presence of the dev.grab(). That permits your program to be the exclusive reader of keyboard input, shutting out fbi. It can be commented out if you want to share.
Conclusion
We have created an example program, mostly for Raspberry Pi though easily adapted to other linux environments, that injects keyboard presses via a python3 program as though those keys had been typed by someone using the physical keyboard, so that graphics programs which rely on key presses, such as fbi or qiv (and probably others - vlc?), can be controlled through software.
We have also provided a basic python program for reading key presses from the actual keyboard. I plan to use these things for my advanced RPi photo frame project.
References and related
The python evdev tutorial is really helpful: https://python-evdev.readthedocs.io/en/latest/tutorial.html
Raspberry Pi advanced photo frame article does not exist yet. The basic RPi photo frame article is here.
Another piece to the puzzle is turning GPS coordinates into a town name. That brief write-up is here.
|
Black Mamba - open quickly and other shortcuts
0.0.11 released (git & pip):
two shortcuts modified
Ctrl-Shift-B added for clear annotations & pyflakes (Analyze)
P.S. Did want to use Cmd-Shift-B (Xcode sync), but it's already used in Pythonista for toggle breakpoint.
WARNING: I did release the package via PyPI as well, but StaSh pip doesn't see it. Thinking about what I should do with this :)
@zrzka , I am still using git pull. Working great. Ctrl-Shift-B working perfectly!
I am going to add an issue to the Pythonista github issues about the hud display delay for 'Check Style' and 'Analyze'. I feel it's twice the time it should be. Depends what @omz thinks. It does not deserve its own param setting in my view (more important things). Can live with what it is now, but it seems to me that the hud should be shown for half the time.
|
python → VBA
I know Python. I gave up on C partway through.
Coming from that background, I'm now reaching for VBA (Excel).
I want to be able to use it the same way I use Python.
If all goes well, I'd even like to do animation in Excel or Word.
VBA is available from Excel's "Developer" tab.
To be able to use VBA in Excel, you need the "Developer" tab.
The "Developer" tab is hidden by default.
To show it, see the earlier entry.
"for" statements → "For" statements, and sequential numbering
A for loop in Python:
for i in range(10):
    print i
>>> for i in range(10):
...     print i
...
With for, the index variable changes on each pass through the loop, so if you reference it inside the for block you can vary the processing a little each time.
In the Python example above, the index variable is passed as an argument to the print function, so the output written to standard output changes on each iteration.
Let's do the same thing in VBA's Immediate Window.
I use loop constructs for things like exporting sequentially numbered images when some piece of software's API has an image export feature.
A For loop in VBA:
Sub test()
    Dim i
    For i = 0 To 9
        Debug.Print i
    Next
End Sub
And, it works.
For zero padding (0 fill), use Format:
Sub test()
    Dim i
    For i = 0 To 10
        a = Format(i, "0000")
        Debug.Print a
    Next
End Sub
I wrote a function that multiplies its argument by 3.
In Python it looks like this:
def Abcd(a):
    b = a*3
    return b

Abcd(10)
>>> def Abcd(a):
...     b = a*3
...     return b
...
>>> Abcd(10)
30
>>>
In VBA it goes like this:
Public Function Abcd(ByVal a As Integer) As Integer
    Dim b
    b = 3 * a
    Abcd = b
End Function

Sub test()
    Dim a
    a = Abcd(10)
    Debug.Print a
End Sub
30
The more I do this, the more I end up thinking "Python is so good...".
The roundaboutness of VBA, though...
When writing a function that returns a value, don't use Return
In VBA, you cannot use Return to specify a function's return value.
You'll just get a "syntax error".
If you want to specify a return value, writing
FunctionName = return value
takes the place of Return.
In VB you can use Return, but in VBA you cannot.
I learned that VB and VBA merely have similar syntax; they are different things.
I also learned that both Sub and Public Function are kinds of what is called a procedure.
|
Specifying the X-axis and Y-axis display ranges when showing multiple plots at once
Previously I explained how to display multiple plots at once using subplots or subplot.
This time I'll explain how to change the X-axis and Y-axis display ranges of each of those plots.
Note that the way you specify the display range differs depending on whether you used subplots or subplot.
Let's start with the subplots case.
How to specify display ranges when using subplots
First, a quick review.
With subplots, displaying multiple plots looked like this:
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2, 2)
axes[0][0].plot(x, y1)
axes[1][0].plot(x, y2)
axes[0][1].plot(x, y3)
axes[1][1].plot(x, y4)
plt.show()
Result:
With subplots, you specify the X-axis and Y-axis display ranges with set_xlim(min, max) and set_ylim(min, max), just as you would for a two-axis plot.
Of the four plots above, let's change the display ranges of the top-left and bottom-right plots.
The top-left plot is axes[0][0] and the bottom-right plot is axes[1][1], so the commands to add are:
axes[0][0].set_xlim(min, max)
axes[0][0].set_ylim(min, max)
axes[1][1].set_xlim(min, max)
axes[1][1].set_ylim(min, max)
Pick some minimum and maximum values and give it a try.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2, 2)
axes[0][0].plot(x, y1)
axes[1][0].plot(x, y2)
axes[0][1].plot(x, y3)
axes[1][1].plot(x, y4)
axes[0][0].set_xlim(2,5)
axes[0][0].set_ylim(0,10)
axes[1][1].set_xlim(5,8)
axes[1][1].set_ylim(10,20)
plt.show()
Result:
The display ranges have been changed.
How to specify display ranges when using subplot
Next, let's look at how to specify display ranges when using subplot.
First, a review:
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,2,1)
plt.plot(x, y1)
plt.subplot(2,2,2)
plt.plot(x, y2)
plt.subplot(2,2,3)
plt.plot(x, y3)
plt.subplot(2,2,4)
plt.plot(x, y4)
plt.show()
Result:
When using subplot, you specify the display range with xlim(min, max) and ylim(min, max).
What matters, though, is where you write xlim and ylim.
For example, to limit the display range of the top-left plot, you write them right after plotting the top-left plot's data.
That may be hard to follow, so let me spell it out.
The part of the program that plots each graph is this:
plt.subplot(2,2,1)
plt.plot(x, y1)
plt.subplot(2,2,2)
plt.plot(x, y2)
plt.subplot(2,2,3)
plt.plot(x, y3)
plt.subplot(2,2,4)
plt.plot(x, y4)
From top to bottom these are the top-left, top-right, bottom-left, and bottom-right plots.
To make it easier to follow, I'll label them with comments.
Top-left: upper left, top-right: upper right, bottom-left: lower left, bottom-right: lower right
#upper left
plt.subplot(2,2,1)
plt.plot(x, y1)
#upper right
plt.subplot(2,2,2)
plt.plot(x, y2)
#lower left
plt.subplot(2,2,3)
plt.plot(x, y3)
#lower right
plt.subplot(2,2,4)
plt.plot(x, y4)
To specify the display range of the top-left plot, write xlim and ylim at this position:
#upper left
plt.subplot(2,2,1)
plt.plot(x, y1)
plt.xlim(min, max)
plt.ylim(min, max)
#upper right
plt.subplot(2,2,2)
plt.plot(x, y2)
#lower left
plt.subplot(2,2,3)
plt.plot(x, y3)
#lower right
plt.subplot(2,2,4)
plt.plot(x, y4)
Plug in some minimum and maximum values and try it.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,2,1)
plt.plot(x, y1)
plt.xlim(2,5)
plt.ylim(0,10)
plt.subplot(2,2,2)
plt.plot(x, y2)
plt.subplot(2,2,3)
plt.plot(x, y3)
plt.subplot(2,2,4)
plt.plot(x, y4)
plt.show()
Result:
The top-left plot's display range has changed.
Now let's also change the display range of the bottom-right plot.
So this time we add xlim and ylim at this position:
#upper left
plt.subplot(2,2,1)
plt.plot(x, y1)
#upper right
plt.subplot(2,2,2)
plt.plot(x, y2)
#lower left
plt.subplot(2,2,3)
plt.plot(x, y3)
#lower right
plt.subplot(2,2,4)
plt.plot(x, y4)
plt.xlim(min, max)
plt.ylim(min, max)
Combined with the earlier top-left range specification, the whole program becomes:
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,2,1)
plt.plot(x, y1)
plt.xlim(2,5)
plt.ylim(0,10)
plt.subplot(2,2,2)
plt.plot(x, y2)
plt.subplot(2,2,3)
plt.plot(x, y3)
plt.subplot(2,2,4)
plt.plot(x, y4)
plt.xlim(5,8)
plt.ylim(10,20)
plt.show()
Result:
This time the display range of the bottom-right plot has changed.
Note that the command itself stays the same, plt.xlim(min, max) or plt.ylim(min, max); which plot's range gets set depends on where in the program you write the command.
Summary
This time I explained how to change the X-axis and Y-axis display ranges when displaying multiple plots at once.
As introduced before, there are two ways to display multiple plots at once, subplots and subplot, and the way to specify display ranges differs between them.
With subplots, you specify the X-axis range with set_xlim(min, max) and the Y-axis range with set_ylim(min, max).
You write it as XX.set_xlim(min, max), where XX is the specific plot whose range you want to set.
With subplot, you specify the X-axis range with xlim(min, max) and the Y-axis range with ylim(min, max).
In this case, you choose which plot the range applies to by writing the call immediately after plotting the data of the plot you want to restrict.
Both approaches have their pros and cons, and you have your own way of writing programs, so rather than one being better than the other, just pick whichever fits your style.
Next time I'll explain how to display titles when using subplots.
That's it for this time.
|
0x00 The openpyxl module
This module lets you read and write Excel files.
0x01 Reading data
The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from openpyxl import load_workbook

# Set data_only=True, otherwise cells containing formulas are read back as the formula, not the value
wb = load_workbook(filename='aa.xlsx', data_only=True)

sheetnames = wb.get_sheet_names()  # get all sheet names
print u"Sheets present: %s" % sheetnames

ws = wb.get_sheet_by_name(sheetnames[0])
print u"The first sheet is named: %s" % ws.title  # Sheet1

rows = ws.max_row        # number of rows
columns = ws.max_column  # number of columns
print "Sheet %s has %d rows and %d columns" % (ws.title, rows, columns)  # 10 rows, 2 columns
print

print u"Reading some cells:"
print ws['A1'].value, ws['B1'].value
print ws['A2'].value, ws['B2'].value
print ws.cell(row=1, column=2).value

print u"\nAll data in sheet %s:" % ws.title
for x in range(1, rows+1):
    for y in range(1, columns+1):
        print ws.cell(row=x, column=y).value, '\t',
    print
The result is as follows:
0x02 Writing data
The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from openpyxl import Workbook

wb = Workbook()

# Create the sheets
ws1 = wb.active  # the first sheet is accessed this way; it means "start from the first sheet"
ws1.title = 's1'
ws2 = wb.create_sheet(title='s2')
ws3 = wb.create_sheet(title='s3')

# Write data
ws1['A1'] = 1111
ws1['A2'] = 2222
ws1['A3'] = 3333
ws2['A1'] = 'ssssssssss'
ws2['B1'] = 'dddddddddd'
for x in range(1, 4):
    for y in range(1, 8):
        v = int(str(x)+str(y))
        _ = ws3.cell(column=x, row=y, value=v)

wb.save(filename='test.xlsx')  # save the data
The result is as follows:
0x03 A practical example
$ ls
111.txt  222.txt  t2x.py
$ cat 111.txt
aaa--111--AAA
bbb--222--BBB
ccc--333--CCC
ddd--444--DDD
$ cat 222.txt
hello--xiaoming--hello,world!
test--xiaohua--This is a test message.
weather--lihua--It will be sunny tomorrow.
Split each line on -- and write the fields to an xlsx file.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# code by reber <1070018473@qq.com>
from openpyxl import Workbook

def write2xlsx():
    def get_content(filename):
        data = []
        with open(filename) as f:
            lines = f.readlines()
            for line in lines:
                data.append(line.strip().split('--'))
        return data

    def write2sheet(ws, filename):
        data = get_content(filename)
        # print data
        for x in range(1, len(data)+1):   # row number
            for y in xrange(1, 4):        # column number
                print data[x-1][y-1],
                _ = ws.cell(row=x, column=y, value=data[x-1][y-1])
            print

    wb = Workbook()
    fs = ['111.txt', '222.txt']
    for index, filename in enumerate(fs):
        sheet_name = filename.split('.')[0]
        ws = wb.create_sheet(title=sheet_name, index=index)
        write2sheet(ws, filename)
    wb.save(filename='test.xlsx')

write2xlsx()
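One caveat: the get_sheet_names()/get_sheet_by_name() calls used above are deprecated in newer openpyxl releases. A Python 3 sketch of the equivalent reads (assuming the same aa.xlsx file as in the reading example):
from openpyxl import load_workbook

wb = load_workbook(filename='aa.xlsx', data_only=True)
print(wb.sheetnames)           # list of sheet names
ws = wb[wb.sheetnames[0]]      # index a worksheet by its name
print(ws.title, ws.max_row, ws.max_column)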
|
XPath is a language / set of rules for finding information in XML documents, traversing them by XML elements or attributes.
XPath development tools
Open-source expression editor: XMLQuire
Chrome extension: XPath Helper
You can also copy an XPath directly from Google Chrome, but the result may not generalize well.
Usage
Install lxml
pip install lxml
conda install lxml
Import etree
from lxml import etree
Build the HTML tree
from lxml import etree
text = """
<!DOCTYPE html>
<html lang="zh">
<head>
<meta charset="UTF-8">
<title>这是标题</title>
</head>
<body>
<div style="color:#FF0000"><p>这是段落1</p> </div>
<div style="color:#FFFF00"><p>这是段落2</p> </div>
<div style="color:#000000"><p>这是段落2</p> </div>
</body>
</html>
"""
html = etree.HTML(text)
Selecting nodes
nodename: selects all child nodes of the named node
/: selects from the root node, or the immediate child of the current node
result = html.xpath('/html')
print(result)
"""
[<Element html at 0x7f2448926b40>]
"""
//: selects matching nodes anywhere in the document, regardless of their position
result = html.xpath('//div')
print(result)
"""
[<Element div at 0x7f6de30e7960>, <Element div at 0x7f6de30e7910>, <Element div at 0x7f6de30e78c0>]
"""
.: selects the current node
result = html.xpath('//div')
for r in result:
    s = r.xpath('./p')  # select the p node under the current div node
    print(s)
"""
[<Element p at 0x7fd70835c780>]
[<Element p at 0x7fd70835c730>]
[<Element p at 0x7fd70835c780>]
"""
..: selects the parent of the current node
result = html.xpath('//div')  # select the div nodes
for r in result:
    s = r.xpath('..')  # select the parent of the current node; each div's parent is body
    print(s)
"""
[<Element body at 0x7f5d65f6f820>]
[<Element body at 0x7f5d65f6f820>]
[<Element body at 0x7f5d65f6f820>]
"""
@: selects by attribute
result = html.xpath('//div[@style="color:#FF0000"]')  # select the div node whose style is "color:#FF0000"
print(result)
"""
[<Element div at 0x7f6673d7b9b0>]
"""
/: searches along the given path, i.e. selects a child node
//: selects descendants, including children and grandchildren
Extracting attributes or text
text(): extracts the text of the current node
result = html.xpath('//div/p/text()')  # select the text under //div/p
print(result)
"""
['这是段落1', '这是段落2', '这是段落2']
"""
string(.): extracts all text of the current node and its descendants
result = html.xpath('string(.)')
print(result)
"""
这是标题
这是段落1
这是段落2
这是段落2
"""
@: extracts a given attribute; as an example, extract all URLs from the Baidu home page
import requests
from lxml import etree
from pprint import pprint
import re
url = 'https://www.baidu.com'
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)\
Chrome/80.0.3987.149 Safari/537.36'
}
response = requests.get(url, headers=headers)
text = response.text
html = etree.HTML(text)
result = html.xpath('//a/@href')  # select the href attribute of every a node
p = re.compile('http.?://.*?')  # regular expression for links that start with http or https
list2 = []
for r in result:
    result2 = p.match(r)  # check whether the link matches
    if result2:  # if it matches
        list2.append(r)
pprint(list2)
"""
['https://passport.baidu.com/v2/?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2F&sms=5',
'https://voice.baidu.com/act/newpneumonia/newpneumonia/?from=osari_pc_1',
'http://news.baidu.com',
'https://www.hao123.com',
'http://map.baidu.com',
'http://v.baidu.com',
'http://tieba.baidu.com',
'http://xueshu.baidu.com',
'https://passport.baidu.com/v2/?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2F&sms=5',
'http://www.baidu.com/gaoji/preferences.html',
'http://www.baidu.com/more/',
'http://ir.baidu.com',
'http://e.baidu.com/?refer=888',
'http://www.beian.gov.cn/portal/registerSystemInfo?recordcode=11000002000001',
'http://tieba.baidu.com/f?kw=&fr=wwwt',
'http://zhidao.baidu.com/q?ct=17&pn=0&tn=ikaslist&rn=10&word=&fr=wwwt',
'http://music.taihe.com/search?fr=ps&ie=utf-8&key=',
'http://image.baidu.com/search/index?tn=baiduimage&ps=1&ct=201326592&lm=-1&cl=2&nc=1&ie=utf-8&word=',
'http://v.baidu.com/v?ct=301989888&rn=20&pn=0&db=0&s=25&ie=utf-8&word=',
'http://map.baidu.com/m?word=&fr=ps01000',
'http://wenku.baidu.com/search?word=&lm=0&od=0&ie=utf-8']
"""
Predicates
/School/Student[1]: selects the first Student child of School
/School/Student[last()]: selects the last Student child of School
/School/Student[position()<3]: selects the first two Student children of School
//Student[@score="99"]: selects all Student nodes whose score attribute is 99 (illustrated in the sketch below)
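A small sketch of these predicates with lxml; the School/Student document below is invented purely for illustration:
from lxml import etree

xml = """<School>
  <Student score="87">Alice</Student>
  <Student score="99">Bob</Student>
  <Student score="92">Carol</Student>
</School>"""
root = etree.fromstring(xml)
print(root.xpath('/School/Student[1]/text()'))             # ['Alice']
print(root.xpath('/School/Student[last()]/text()'))        # ['Carol']
print(root.xpath('/School/Student[position()<3]/text()'))  # ['Alice', 'Bob']
print(root.xpath('//Student[@score="99"]/text()'))         # ['Bob']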
XPath operators
Operator    Description    Example    Return value
|    union of two node-sets    //book | //cd    a node-set containing all book and cd elements
+    addition    6 + 4    10
-    subtraction    6 - 4    2
*    multiplication    6 * 4    24
div    division    8 div 4    2
=    equal    price=9.80    true if price is 9.80, false if price is 9.90
!=    not equal    price!=9.80    true if price is 9.90, false if price is 9.80
<    less than    price<9.80    true if price is 9.00, false if price is 9.90
<=    less than or equal to    price<=9.80    true if price is 9.00, false if price is 9.90
>    greater than    price>9.80    true if price is 9.90, false if price is 9.80
>=    greater than or equal to    price>=9.80    true if price is 9.90, false if price is 9.70
or    or    price=9.80 or price=9.70    true if price is 9.80, false if price is 9.50
and    and    price>9.00 and price<9.90    true if price is 9.80, false if price is 8.50
mod    modulus (remainder of division)    5 mod 2    1
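As a quick sketch of the or operator inside a predicate, reusing the html tree built earlier in this section:
# divs whose style is either the red one or the yellow one
result = html.xpath('//div[@style="color:#FF0000" or @style="color:#FFFF00"]')
print(len(result))  # expected: 2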
|
Dask Release 0.14.1
I’m pleased to announce the release of Dask version 0.14.1. This release contains a variety of performance and feature improvements. This blogpost includes some notable features and changes since the last release on February 27th.
As always you can conda install from conda-forge
conda install -c conda-forge dask distributed
or you can pip install from PyPI
pip install dask[complete] --upgrade
Arrays
Recent work in distributed computing and machine learning have motivated new performance-oriented and usability changes to how we handle arrays.
Automatic chunking and operation on NumPy arrays
Many interactions between Dask arrays and NumPy arrays work smoothly. NumPy arrays are made lazy and are appropriately chunked to match the operation and the Dask array.
>>> x = np.ones(10)                 # a numpy array
>>> y = da.arange(10, chunks=(5,))  # a dask array
>>> z = x + y                       # combined become a dask.array
>>> z
dask.array<add, shape=(10,), dtype=float64, chunksize=(5,)>
>>> z.compute()
array([  1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.])
Reshape
Reshaping distributed arrays is simple in simple cases, and can be quite complex in complex cases. Reshape now supports a much broader set of shape transformations in which any dimension is collapsed or merged into other dimensions.
>>> x = da.ones((2, 3, 4, 5, 6), chunks=(2, 2, 2, 2, 2))
>>> x.reshape((6, 2, 2, 30, 1))
dask.array<reshape, shape=(6, 2, 2, 30, 1), dtype=float64, chunksize=(3, 1, 2, 6, 1)>
This operation ends up being quite useful in a number of distributed array cases.
Optimize Slicing to Minimize Communication
Dask.array slicing optimizations are now careful to produce graphs that avoid situations that could cause excess inter-worker communication. The details of how they do this are a bit out of scope for a short blogpost, but the history here is interesting.
Historically dask.arrays were used almost exclusively by researchers with large on-disk arrays stored as HDF5 or NetCDF files. These users primarily used the single machine multi-threaded scheduler. We heavily tailored Dask array optimizations to this situation and made that community pretty happy. Now as some of that community switches to cluster computing on larger datasets the optimization goals shift a bit. We have tons of distributed disk bandwidth but really want to avoid communicating large results between workers. Supporting both use cases is possible and I think that we’ve achieved that in this release so far, but it’s starting to require increasing levels of care.
Micro-optimizations
With distributed computing also comes larger graphs and a growing importance of graph-creation overhead. This has been optimized somewhat in this release. We expect this to be a focus going forward.
DataFrames
Set_index
Set_index is smarter in two ways:
If you set_index on a column that happens to be sorted then we'll identify that and avoid a costly shuffle. This was always possible with the sorted= keyword but users rarely used this feature. Now this is automatic (see the sketch after this list).
Similarly when setting the index we can look at the size of the data and determine if there are too many or too few partitions and rechunk the data while shuffling. This can significantly improve performance if there are too many partitions (a common case).
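A minimal sketch of the first case; the column names and sizes here are invented for illustration and assume dask.dataframe and pandas are installed:
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'t': pd.date_range('2017-01-01', periods=1000, freq='S'),
                    'x': range(1000)})
df = dd.from_pandas(pdf, npartitions=10)

df2 = df.set_index('t')  # 't' is already sorted, so the shuffle can be skipped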
Shuffle performance
We’ve micro-optimized some parts of dataframe shuffles. Big thanks to the Pandas developers for the help here. This accelerates set_index, joins, groupby-applies, and so on.
Fastparquet
The fastparquet library has seen a lot of use lately and has undergone a number of community bugfixes.
Importantly, Fastparquet now supports Python 2.
We strongly recommend Parquet as the standard data storage format for Dask dataframes (and Pandas DataFrames).
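For example, round-tripping a Dask dataframe through Parquet looks roughly like this (a sketch; the file names are made up and fastparquet is assumed to be installed):
import dask.dataframe as dd

df = dd.read_csv('data-*.csv')         # hypothetical input files
df.to_parquet('data.parquet')          # written via fastparquet
df2 = dd.read_parquet('data.parquet')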
Distributed Scheduler
Replay remote exceptions
Debugging is hard in part because exceptions happen on remote machines where normal debugging tools like pdb can't reach. Previously we were able to bring back the traceback and exception, but you couldn't dive into the stack trace to investigate what went wrong:
def div(x, y):
    return x / y

>>> future = client.submit(div, 1, 0)
>>> future
<Future: status: error, key: div-4a34907f5384bcf9161498a635311aeb>
>>> future.result()  # getting result re-raises exception locally
<ipython-input-3-398a43a7781e> in div()
      1 def div(x, y):
----> 2     return x / y
ZeroDivisionError: division by zero
Now Dask can bring a failing task and all necessary data back to the local machine and rerun it so that users can leverage the normal Python debugging toolchain.
>>> client.recreate_error_locally(future)
<ipython-input-3-398a43a7781e> in div(x, y)
      1 def div(x, y):
----> 2     return x / y
ZeroDivisionError: division by zero
Now if you're in IPython or a Jupyter notebook you can use the %debug magic to jump into the stack trace, investigate local variables, and so on.
In [8]: %debug
> <ipython-input-3-398a43a7781e>(2)div()
      1 def div(x, y):
----> 2     return x / y

ipdb> pp x
1
ipdb> pp y
0
Async/await syntax
Dask.distributed uses Tornado for network communication and Tornado coroutines for concurrency. Normal users rarely interact with Tornado coroutines; they aren't familiar to most people so we opted instead to copy the concurrent.futures API. However some complex situations are much easier to solve if you know a little bit of async programming.
Fortunately, the Python ecosystem seems to be embracing this change towards native async code with the async/await syntax in Python 3. In an effort to motivate people to learn async programming, and to gently nudge them towards Python 3, Dask.distributed now supports async/await in a few cases.
You can wait on a dask Future
async def f():
    future = client.submit(func, *args, **kwargs)
    result = await future
You can put the as_completed iterator into an async for loop
async for future in as_completed(futures):
    result = await future
    ... do stuff with result ...
And, because Tornado supports the await protocols you can also use the existing shadow concurrency API (everything prepended with an underscore) with await. (This was doable before.)
results = client.gather(futures)         # synchronous
...
results = await client._gather(futures)  # asynchronous
If you’re in Python 2 you can always do this with normal yield andthe tornado.gen.coroutine decorator.
Inproc transport
In the last release we enabled Dask to communicate over more things than just TCP. In practice this doesn’t come up (TCP is pretty useful). However in this release we now support single-machine “clusters” where the clients, scheduler, and workers are all in the same process and transfer data cost-free over in-memory queues.
This allows the in-memory user community to use some of the more advanced features (asynchronous computation, spill-to-disk support, web-diagnostics) that are only available in the distributed scheduler.
This is on by default if you create a cluster with LocalCluster without using Nanny processes.
>>> from dask.distributed import LocalCluster, Client
>>> cluster = LocalCluster(nanny=False)
>>> client = Client(cluster)
>>> client
<Client: scheduler='inproc://192.168.1.115/8437/1' processes=1 cores=4>

>>> from threading import Lock         # Not serializable
>>> lock = Lock()                      # Won't survive going over a socket
>>> [future] = client.scatter([lock])  # Yet we can send to a worker
>>> future.result()                    # ... and back
<unlocked _thread.lock object at 0x7fb7f12d08a0>
Connection pooling for inter-worker communications
Workers now maintain a pool of sustained connections between each other. This pool is of a fixed size and removes connections with a least-recently-used policy. It avoids re-connection delays when transferring data between workers. In practice this shaves off a millisecond or two from every communication.
This is actually a revival of an old feature that we had turned off last year when it became clear that the performance here wasn’t a problem.
Along with other enhancements, this takes our round-trip latency down to 11ms on my laptop.
In [10]: %%time
    ...: for i in range(1000):
    ...:     future = client.submit(inc, i)
    ...:     result = future.result()
    ...:
CPU times: user 4.96 s, sys: 348 ms, total: 5.31 s
Wall time: 11.1 s
There may be room for improvement here though. For comparison here is the same test with the concurrent.futures.ProcessPoolExecutor.
In [14]: e = ProcessPoolExecutor(8)

In [15]: %%time
    ...: for i in range(1000):
    ...:     future = e.submit(inc, i)
    ...:     result = future.result()
    ...:
CPU times: user 320 ms, sys: 56 ms, total: 376 ms
Wall time: 442 ms
Also, just to be clear, this measures total roundtrip latency, not overhead. Dask’s distributed scheduler overhead remains in the low hundreds of microseconds.
Related Projects
There has been activity around Dask and machine learning:
dask-learn is undergoing some performance enhancements. It turns out that when you offer distributed grid search people quickly want to scale up their computations to hundreds of thousands of trials.
dask-glm now has a few decent algorithms for convex optimization. The authors of this wrote a blogpost very recently if you’re interested: Developing Convex Optimization Algorithms in Dask
dask-xgboost lets you hand off distributed data in Dask dataframes or arrays and hand it directly to a distributed XGBoost system (that Dask will nicely set up and tear down for you). This was a nice example of easy hand-off between two distributed services running in the same processes.
Acknowledgements
The following people contributed to the dask/dask repository since the 0.14.0 release on February 27th
Antoine Pitrou
Brian Martin
Elliott Sales de Andrade
Erik Welch
Francisco de la Peña
jakirkham
Jim Crist
Jitesh Kumar Jha
Julien Lhermitte
Martin Durant
Matthew Rocklin
Markus Gonser
Talmaj
The following people contributed to the dask/distributed repository since the 1.16.0 release on February 27th
Antoine Pitrou
Ben Schreck
Elliott Sales de Andrade
Martin Durant
Matthew Rocklin
Phil Elson
|
When converting from MyBB to phpBB, why does this error message appear?
Code: Select all
General Error
SQL ERROR [ mysqli ]
Unknown column 'users.yahoo' in 'field list' [1054]
SQL
SELECT users.uid, users.website, users.yahoo, users.aim, users.icq, users.skype, userfields.fid1 FROM mybb_users users LEFT JOIN mybb_userfields AS userfields ON (users.uid = userfields.ufid) ORDER BY users.uid LIMIT 2000
BACKTRACE
FILE: (not given by php)
LINE: (not given by php)
CALL: msg_handler()
FILE: [ROOT]/phpbb/db/driver/driver.php
LINE: 855
CALL: trigger_error()
FILE: [ROOT]/phpbb/db/driver/mysqli.php
LINE: 194
CALL: phpbb\db\driver\driver->sql_error()
FILE: [ROOT]/phpbb/db/driver/mysql_base.php
LINE: 45
CALL: phpbb\db\driver\mysqli->sql_query()
FILE: [ROOT]/phpbb/db/driver/driver.php
LINE: 261
CALL: phpbb\db\driver\mysql_base->_sql_query_limit()
FILE: [ROOT]/install/install_convert.php
LINE: 1263
CALL: phpbb\db\driver\driver->sql_query_limit()
FILE: [ROOT]/install/install_convert.php
LINE: 214
CALL: install_convert->convert_data()
FILE: [ROOT]/install/index.php
LINE: 409
CALL: install_convert->main()
FILE: [ROOT]/install/index.php
LINE: 289
CALL: module->load()
|
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => "Foo::Bar",
VERSION_FROM => "lib/Foo/Bar.pm",
);
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
As there are various Make programs with incompatible syntax, which use operating system shells, again with incompatible syntax, it is important for users of this module to know which flavour of Make a Makefile has been written for so they'll use the correct one and won't have to face the possibly bewildering errors resulting from using the wrong one.
On POSIX systems, that program will likely be GNU Make; on Microsoft Windows, it will be either Microsoft NMake, DMake or GNU Make. See the section on the "MAKE" parameter for details.
ExtUtils::MakeMaker (EUMM) is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
All inputs to WriteMakefile are Unicode characters, not just octets. EUMM seeks to handle all of these correctly. It is currently still not possible to portably use Unicode characters in module names, because this requires Perl to handle Unicode filenames, which is not yet the case on Windows.
See ExtUtils::MakeMaker::FAQ for details of the design and usage.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL INSTALL_BASE=~
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory, and if it exists it executes the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
You can also organize your tests within subdirectories in the t/ directory. To do so, use the test directive in your Makefile.PL. For example, if you had tests in:
t/foo
t/foo/bar
You could tell make to run tests in both of those directories with the following directives:
test => {TESTS => 't/*/*.t t/*/*/*.t'}
test => {TESTS => 't/foo/*.t t/foo/bar/*.t'}
The first will run all test files in all first-level subdirectories and all subdirectories they contain. The second will run tests only in the t/foo and t/foo/bar directories.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
If you want to run particular test files, set the TEST_FILES variable. It is possible to use globbing with this mechanism.
make test TEST_FILES='t/foobar.t t/dagobah*.t'
Windows users who are using nmake should note that due to a bug in nmake, when specifying TEST_FILES you must use back-slashes instead of forward-slashes.
nmake test TEST_FILES='t\foobar.t t\dagobah*.t'
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
                           INSTALLDIRS set to
                 perl            site                vendor

              PERLPREFIX      SITEPREFIX          VENDORPREFIX
INST_ARCHLIB  INSTALLARCHLIB  INSTALLSITEARCH     INSTALLVENDORARCH
INST_LIB      INSTALLPRIVLIB  INSTALLSITELIB      INSTALLVENDORLIB
INST_BIN      INSTALLBIN      INSTALLSITEBIN      INSTALLVENDORBIN
INST_SCRIPT   INSTALLSCRIPT   INSTALLSITESCRIPT   INSTALLVENDORSCRIPT
INST_MAN1DIR  INSTALLMAN1DIR  INSTALLSITEMAN1DIR  INSTALLVENDORMAN1DIR
INST_MAN3DIR  INSTALLMAN3DIR  INSTALLSITEMAN3DIR  INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
Sometimes older versions of the module you're installing live in other directories in @INC. Because Perl loads the first version of a module it finds, not the newest, you might accidentally get one of these older versions even after installing a brand new version. To delete all other versions of the module you're installing (not simply older ones) set the UNINST variable.
make install UNINST=1
INSTALL_BASE can be passed into Makefile.PL to change where your module will be installed. INSTALL_BASE is more like what everyone else calls "prefix" than PREFIX is.
To have everything installed in your home directory, do the following.
# Unix users, INSTALL_BASE=~ works fine
perl Makefile.PL INSTALL_BASE=/path/to/your/home/dir
Like PREFIX, it sets several INSTALL* attributes at once. Unlike PREFIX it is easy to predict where the module will end up. The installation pattern looks like this:
INSTALLARCHLIB INSTALL_BASE/lib/perl5/$Config{archname}
INSTALLPRIVLIB INSTALL_BASE/lib/perl5
INSTALLBIN INSTALL_BASE/bin
INSTALLSCRIPT INSTALL_BASE/bin
INSTALLMAN1DIR INSTALL_BASE/man/man1
INSTALLMAN3DIR INSTALL_BASE/man/man3
INSTALL_BASE in MakeMaker and --install_base in Module::Build (as of 0.28) install to the same location. If you want MakeMaker and Module::Build to install to the same location simply set INSTALL_BASE and --install_base to the same location.
INSTALL_BASE was added in 6.31.
PREFIX and LIB can be used to set several INSTALL* attributes in one go. Here's an example for installing into your home directory.
# Unix users, PREFIX=~ works fine
perl Makefile.PL PREFIX=/path/to/your/home/dir
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib). How the exact location is determined is complicated and depends on how your Perl was configured. INSTALL_BASE works more like what other build systems call "prefix" than PREFIX and we recommend you use that instead.
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
AFS users will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile; on UNIX this is called Makefile.aperl (the name may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this Makefile.aperl, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config, otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line. Attributes that became available with later versions of MakeMaker are indicated.
In order to maintain portability of attributes with older versions of MakeMaker you may want to use App::EUMM::Upgrade with your Makefile.PL.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
Array of strings containing name (and email address) of package author(s). Is used in CPAN Meta files (META.yml or META.json) and PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Available in version 6.55_03 and above.
A hash of modules that are needed to build your module but not run it.
This will go into the build_requires field of your META.yml and the build of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Available in version 6.52 and above.
A hash of modules that are required to run Makefile.PL itself, but not to run your distribution.
This will go into the configure_requires field of your META.yml and the configure of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=~/tmp/ and installation would go into ~/tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. ~/tmp/ not ~/tmp.
Ref to array of subdirectories containing Makefile.PLs e.g. ['sdbm'] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME below but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Specifies the extension of the module's loadable object. For example:
DLEXT => 'unusual_ext', # Default value is $Config{so}
NOTE: When using this option to alter the extension of a module's loadable object, it is also necessary that the module's pm file specifies the same change:
local $DynaLoader::dl_dlext = 'unusual_ext';
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Available in version 6.30_02 and above.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS=perl.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor. Note that if you do not set this, the value of INSTALLVENDORLIB will be used, which is probably not what you want.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Available in version 6.30_02 and above.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to vendor.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless of any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perl library that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
Available in version 6.31 and above.
The licensing terms of your distribution. Generally it's "perl_5" for the same license as Perl itself.
See CPAN::Meta::Spec for the list of options.
Defaults to "unknown".
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
Available in version 6.8305 and above.
When this is set to 1, OBJECT will be automagically derived from O_FILES.
Available in version 6.30_01 and above.
Variant of make you intend to run the generated Makefile with. This parameter lets Makefile.PL know what make quirks to account for when generating the Makefile.
MakeMaker also honors the MAKE environment variable. This parameter takes precedence.
Currently the only significant values are 'dmake' and 'nmake' for Windows users, instructing MakeMaker to generate a Makefile in the flavour of DMake ("Dennis Vadura's Make") or Microsoft NMake respectively.
Defaults to $Config{make}, which may go looking for a Make program in your environment.
How are you supposed to know what flavour of Make a Makefile has been generated for if you didn't specify a value explicitly? Search the generated Makefile for the definition of the MAKE variable, which is used to recursively invoke the Make utility. That will tell you what Make you're supposed to invoke the Makefile with.
Boolean which tells MakeMaker that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
This hash should map POD files (or scripts containing POD) to the man file names under the blib/man1/ directory, as in the following example:
MAN1PODS => {
'doc/command.pod' => 'blib/man1/command.1',
'scripts/script.pl' => 'blib/man1/script.1',
}
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
Example similar to MAN1PODS.
If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
Available in version 6.46 and above.
A hashref of items to add to the CPAN Meta file (META.yml or META.json).
They differ in how they behave if they have the same key as the default metadata. META_ADD will override the default value with its own. META_MERGE will merge its value with the default.
Unless you want to override the defaults, prefer META_MERGE so as to get the advantage of any future defaults.
Where prereqs are concerned, if META_MERGE is used, prerequisites are merged with their counterpart WriteMakefile() argument (PREREQ_PM is merged into {prereqs}{runtime}{requires}, BUILD_REQUIRES into {prereqs}{build}{requires}, CONFIGURE_REQUIRES into {prereqs}{configure}{requires}, and TEST_REQUIRES into {prereqs}{test}{requires}). When prereqs are specified with META_ADD, the only prerequisites added to the file come from the metadata, not WriteMakefile() arguments.
Note that these configuration options are only used for generating META.yml and META.json -- they are NOT used for MYMETA.yml and MYMETA.json. Therefore data in these fields should NOT be used for dynamic (user-side) configuration.
By default CPAN Meta specification 1.4 is used. In order to use CPAN Meta specification 2.0, indicate with meta-spec the version you want to use.
META_MERGE => {
"meta-spec" => { version => 2 },
resources => {
repository => {
type => 'git',
url => 'git://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.git',
web => 'https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker',
},
},
},
Available in version 6.48 and above.
The minimum required version of Perl for this distribution.
Either the 5.006001 or the 5.6.1 format is acceptable.
If the extension links to a library that it builds, set this to the name of the library (see SDBM_File)
The package representing the distribution. For example, Test::More or ExtUtils::MakeMaker. It will be used to derive information about the distribution such as the "DISTNAME", installation locations within the Perl library and where XS files will be looked for by default (see "XS").
NAME must be a valid Perl package name and it must have an associated .pm file. For example, Foo::Bar is a valid NAME and there must exist Foo/Bar.pm. Any XS code should be in Bar.xs unless stated otherwise.
Your distribution must have a NAME.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml and META.json module meta-data files during 'make distdir'.
Defaults to false.
Available in version 6.57_02 and above.
When true, suppresses the generation of MYMETA.yml and MYMETA.json module meta-data files during 'perl Makefile.PL'.
Defaults to false.
Available in version 6.7501 and above.
When true, suppresses the writing of packlist files for installs.
Defaults to false.
Available in version 6.7501 and above.
When true, suppresses the appending of installations to perllocal.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string or an array containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o" or ["tkpBind.o", "tkpButton.o", "tkpCanvas.o"]
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl. If it contains spaces or other shell metacharacters, it needs to be quoted in a way that protects them, since this value is intended to be inserted in a shell command line in the Makefile. E.g.:
# Perl executable lives in "C:/Program Files/Perl/bin"
# Normally you don't need to set this yourself!
$ perl Makefile.PL PERL='"C:/Program Files/Perl/bin/perl.exe" -w'
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Neglecting to set this flag in any one of the loaded extensions nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp}, falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Available in version 6.51_01 and above.
Desired permission for directories. Defaults to 755.
Desired permission for read/writable files. Defaults to 644.
Desired permission for executable files. Defaults to 755.
MakeMaker can run programs to generate files for you at build time. By default any file named *.PL (except Makefile.PL and Build.PL) in the top level directory will be assumed to be a Perl program and run passing its own basename in as an argument. This basename is actually a build target, and there is an intention, but not a requirement, that the *.PL file make the file passed to it as an argument. For example...
perl foo.PL foo
This behavior can be overridden by supplying your own set of files to search. PL_FILES accepts a hash ref, the key being the file to run and the value is passed in as the first argument when the PL file is run.
PL_FILES => {'bin/foobar.PL' => 'bin/foobar'}
PL_FILES => {'foo.PL' => 'foo.c'}
Would run bin/foobar.PL like this:
perl bin/foobar.PL bin/foobar
If multiple files from one program are desired an array ref can be used.
PL_FILES => {'bin/foobar.PL' => [qw(bin/foobar1 bin/foobar2)]}
In this case the program will be run multiple times using each target file.
perl bin/foobar.PL bin/foobar1
perl bin/foobar.PL bin/foobar2
If an output file depends on extra input files beside the script itself, a hash ref can be used in version 7.36 and above:
PL_FILES => { 'foo.PL' => {
    'foo.out' => 'foo.in',
    'bar.out' => [qw(bar1.in bar2.in)],
} },
In this case the extra input files will be passed to the program after the target file:
perl foo.PL foo.out foo.in
perl foo.PL bar.out bar1.in bar2.in
PL files are normally run after pm_to_blib and include INST_LIB and INST_ARCH in their @INC, so the just built modules can be accessed... unless the PL file is making a module (or anything else in PM) in which case it is run before pm_to_blib and does not include INST_LIB and INST_ARCH in its @INC. This apparently odd behavior is there for backwards compatibility (and it's somewhat DWIM). The argument passed to the .PL is set up as a target to build in the Makefile. In other sections such as postamble you can specify a dependency on the filename/argument that the .PL is supposed (or will have, now that it is a dependency) to generate. Note the file to be generated will still be generated and the .PL will still run even without an explicit dependency created by you, since the all target still depends on running all eligible-to-run .PL files.
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIB)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done. You could use:
PM_FILTER => 'perl -ne "print unless /^\\#/"',
to remove all the leading comments on the fly during the build. In order to be as portable as possible, please consider using a Perl one-liner rather than Unix (or other) utilities, as above. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = perl -ne "print unless /^\#/"
Without the \ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
You will almost certainly be better off using the PL_FILES system, instead. See above, or the ExtUtils::MakeMaker::FAQ entry.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
Available in version 6.8502 and above.
Name of the executable used to run PPM_UNINSTALL_SCRIPT below. (e.g. perl)
Available in version 6.8502 and above.
Name of the script that gets executed by the Perl Package Manager before the removal of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be affected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die instead of simply informing the user of the missing dependencies.
It is extremely rare to have to use PREREQ_FATAL. Its use by module authors is strongly discouraged and should never be used lightly.
For dependencies that are required in order to run Makefile.PL, see CONFIGURE_REQUIRES.
Module installation tools have ways of resolving unmet dependencies but to do that they need a Makefile. Using PREREQ_FATAL breaks this. That's bad.
Assuming you have good test coverage, your tests should fail with missing dependencies informing the user more strongly that something is wrong. You can write a t/00compile.t test which will simply check that your code compiles and stop "make test" prematurely if it doesn't. See "BAIL_OUT" in Test::More for more details.
A hash of modules that are needed to run your module. The keys are the module names ie. Test::More, and the minimum version is the value. If the required version number is 0 any version will do. The versions given may be a Perl v-string (see version) or a range (see CPAN::Meta::Requirements).
This will go into the requires field of your META.yml and the runtime of the prereqs field of your META.json.
PREREQ_PM => {
# Require Test::More at least 0.47
"Test::More" => "0.47",
# Require any version of Acme::Buffy
"Acme::Buffy" => 0,
}
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = {
'A::B' => Vers1,
'C::D' => Vers2,
...
};
If a distribution defines a minimal required perl version, this is added to the output as an additional line of the form:
$MIN_PERL_VERSION = '5.008001';
If BUILD_REQUIRES is not empty, it will be dumped as $BUILD_REQUIRES hashref.
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
A minimal required perl version, if present, will look like this:
perl(perl)>=5.008001
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
Available in version 6.18 and above.
When true, perform the generation and addition to the MANIFEST of the SIGNATURE file in the distdir during 'make distdir', via 'cpansign -s'.
Note that you need to install the Module::Signature module to perform this operation.
Defaults to false.
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Available in version 6.64 and above.
A hash of modules that are needed to test your module but not run or build it.
This will go into the build_requires field of your META.yml and the test of the prereqs field of your META.json.
The format is the same as PREREQ_PM.
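For example, a test-only dependency might be declared like this (the module and version shown are illustrative):

    TEST_REQUIRES => {
        # needed by the test suite only, not at runtime
        'Test::More' => '0.98',
    }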
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
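A small sketch, assuming the typemap files live outside the current directory (both paths are placeholders):

    TYPEMAPS => ['../shared/typemap', 'typemap.win32'],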
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains something like a $VERSION assignment or package Name VERSION will be used. The following lines will be parsed o.k.:
# Good
package Foo::Bar 1.23; # 1.23
$VERSION = '1.00'; # 1.00
*VERSION = \'1.01'; # 1.01
($VERSION) = q$Revision$ =~ /(\d+)/g; # The digits in $Revision$
$FOO::VERSION = '1.10'; # 1.10
*FOO::VERSION = \'1.11'; # 1.11
but these will fail:
# Bad
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
"Version strings" are incompatible and should not be used.
# Bad
$VERSION = 1.2.3;
$VERSION = v1.2.3;
version objects are fine. As of MakeMaker 6.35 version.pm will be automatically loaded, but you must declare the dependency on version.pm. For compatibility with older MakeMaker you should load version.pm on the same line as $VERSION is declared.
# All on one line
use version; our $VERSION = qv(1.2.3);
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
Available in version 7.12 and above.
Hashref with options controlling the operation of XSMULTI:
{
xs => {
all => {
# options applying to all .xs files for this distribution
},
'lib/Class/Name/File' => { # specifically for this file
DEFINE => '-Dfunktastic', # defines for only this file
INC => "-I$funkyliblocation", # include flags for only this file
# OBJECT => 'lib/Class/Name/File$(OBJ_EXT)', # default
LDFROM => "lib/Class/Name/File\$(OBJ_EXT) $otherfile\$(OBJ_EXT)", # what's linked
},
},
}
Note xs is the file-extension. More possibilities may arise in the future. Note that object names are specified without their XS extension.
LDFROM defaults to the same as OBJECT. For XSMULTI, OBJECT defaults to just the XS filename with the extension replaced with the compiler-specific object-file extension.
The distinction between OBJECT and LDFROM: OBJECT is the make target, so make will try to build it. However, LDFROM is what will actually be linked together to make the shared object or static library (SO/SL), so if you override it, make sure it includes what you want to make the final SO/SL, almost certainly including the XS basename with $(OBJ_EXT) appended.
Available in version 7.12 and above.
When this is set to 1, multiple XS files may be placed under lib/ next to their corresponding *.pm files (this is essential for compiling with the correct VERSION values). This feature should be considered experimental, and details of it may change.
This feature was inspired by, and small portions of code copied from, ExtUtils::MakeMaker::BigHelper. Hopefully this feature will render that module mainly obsolete.
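A rough sketch of turning the feature on; the distribution name is a placeholder, and the corresponding .xs files would then live under lib/ next to their .pm files:

    use ExtUtils::MakeMaker;
    WriteMakefile(
        NAME    => 'Foo::Bar',   # placeholder
        XSMULTI => 1,            # pick up .xs files under lib/ alongside the .pm files
    );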
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to -prototypes, -noprototypes or the empty string. The empty string is equivalent to the xsubpp default, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDENCY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
Specify the targets for testing.
{TESTS => 't/*.t'}
RECURSIVE_TEST_FILES can be used to include all directories recursively under t that contain .t files. It will be ignored if you provide your own TESTS attribute; it defaults to false.
{RECURSIVE_TEST_FILES=>1}
This is supported since 6.76
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
MAN3PODS => ' '
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is MAN3PODS => { }.
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture-specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with a .pl file name extension (e.g. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules by which a hints file is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See "fullcheck" in ExtUtils::Manifest for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See "skipcheck" in ExtUtils::Manifest for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
does a realclean first and then removes backup files such as *~, *.bak, *.old and *.orig
rewrites the MANIFEST file, adding all remaining files found (See "mkmanifest" in ExtUtils::Manifest for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create META.yml and META.json module meta-data files in the distdir and add them to the distdir's MANIFEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO_UNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile(
...other options...
dist => {
COMPRESS => "bzip2",
SUFFIX => ".bz2"
}
);
A problem that has long plagued users of MakeMaker-based modules is getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. Over the years, it has become standard to keep this information in one or more CPAN Meta files distributed with each distribution.
The original format of CPAN Meta files was YAML and the corresponding file was called META.yml. In 2010, version 2 of the CPAN::Meta::Spec was released, which mandates JSON format for the metadata in order to overcome certain compatibility issues between YAML serializers and to avoid breaking older clients unable to handle a new version of the spec. The CPAN::Meta library is now standard for accessing old and new-style Meta files.
If CPAN::Meta is installed, MakeMaker will automatically generate META.json and META.yml files for you and add them to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
At the 2008 QA Hackathon in Oslo, Perl module toolchain maintainers agreed to use the CPAN Meta format to communicate post-configuration requirements between toolchain components. These files, MYMETA.json and MYMETA.yml, are generated when Makefile.PL generates a Makefile (if CPAN::Meta is installed). Clients like CPAN or CPANPLUS will read these files to see what prerequisites must be fulfilled before building or testing the distribution. If you wish to shut this feature off, set the NO_MYMETA WriteMakeFile() flag to true.
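If you do need to disable both kinds of meta files, a Makefile.PL could pass the flags roughly like this (sketch only; the name is a placeholder):

    WriteMakefile(
        NAME      => 'Foo::Bar',  # placeholder
        NO_META   => 1,           # skip META.yml / META.json in 'make distdir'
        NO_MYMETA => 1,           # skip MYMETA.yml / MYMETA.json at configure time
    );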
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
use ExtUtils::MakeMaker qw(WriteEmptyMakefile);
WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
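A small usage sketch; the question, default and follow-up logic are illustrative only:

    my $answer = prompt('Build the optional XS backend?', 'n');
    if (lc $answer eq 'y') {
        # adjust LIBS, DEFINE, etc. here before calling WriteMakefile()
    }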
os_unsupported();
os_unsupported if $^O eq 'MSWin32';
The os_unsupported() function provides a way to correctly exit your Makefile.PL before calling WriteMakefile. It is essentially a die with the message "OS unsupported".
This is supported since 7.26
Please note that while this module works on Perl 5.6, it is no longer being routinely tested on 5.6 - the earliest Perl version being routinely tested, and expressly supported, is 5.8.1. However, patches to repair any breakage on 5.6 are still being accepted.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split as the shell would, and the result is processed before any actual command line arguments are processed.
PERL_MM_OPT='CCFLAGS="-Wl,-rpath -Wl,/foo/bar/lib" LIBS="-lwibble -lwobble"'
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
Same as the PERL_CORE parameter. The parameter overrides this.
Module::Build is a pure-Perl alternative to MakeMaker which does not rely on make or any other external utility. It may be easier to extend to suit your needs.
Module::Build::Tiny is a minimal pure-Perl alternative to MakeMaker that follows the Build.PL protocol of Module::Build but without its complexity and cruft, implementing only the installation of the module and leaving authoring to mbtiny or other authoring tools.
Module::Install is a (now discouraged) wrapper around MakeMaker which adds features not normally available.
File::ShareDir::Install makes it easy to install static, sometimes also referred to as 'shared' files. File::ShareDir helps accessing the shared files after installation. Test::File::ShareDir helps when writing tests to use the shared files both before and after installation.
Dist::Zilla is an authoring tool which allows great customization and extensibility of the author experience, relying on the existing install tools like ExtUtils::MakeMaker only for installation.
Dist::Milla is a Dist::Zilla bundle that greatly simplifies common usage.
Minilla is a minimal authoring tool that does the same things as Dist::Milla without the overhead of Dist::Zilla.
Andy Dougherty doughera@lafayette.edu, Andreas König andreas.koenig@mind.de, Tim Bunce timb@cpan.org. VMS support by Charles Bailey bailey@newman.upenn.edu. OS/2 support by Ilya Zakharevich ilya@math.ohio-state.edu.
Currently maintained by Michael G Schwern schwern@pobox.com
Send patches and ideas to makemaker@perl.org.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see https://metacpan.org/release/ExtUtils-MakeMaker.
Repository available at https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile( ATTRIBUTE => VALUE [, ...] );
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
MakeMaker is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
See ExtUtils::MakeMaker::Tutorial.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL PREFIX=/tmp/myperl5
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory and if it exists it executes the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
INSTALLDIRS set to:
              perl                site                 vendor

              PERLPREFIX          SITEPREFIX           VENDORPREFIX
INST_ARCHLIB  INSTALLARCHLIB      INSTALLSITEARCH      INSTALLVENDORARCH
INST_LIB      INSTALLPRIVLIB      INSTALLSITELIB       INSTALLVENDORLIB
INST_BIN      INSTALLBIN          INSTALLSITEBIN       INSTALLVENDORBIN
INST_SCRIPT   INSTALLSCRIPT       INSTALLSCRIPT        INSTALLSCRIPT
INST_MAN1DIR  INSTALLMAN1DIR      INSTALLSITEMAN1DIR   INSTALLVENDORMAN1DIR
INST_MAN3DIR  INSTALLMAN3DIR      INSTALLSITEMAN3DIR   INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
PREFIX and LIB can be used to set several INSTALL* attributes in one go. The quickest way to install a module in a non-standard place might be
perl Makefile.PL PREFIX=~
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib).
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note that in both cases the tilde expansion is done by MakeMaker, not by perl or by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
AFS users will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile; on UNIX it is called Makefile.aperl (the name may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this Makefile.aperl, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config , otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
String containing name (and email address) of package author(s). Is used in PPD (Perl Package Description) files for PPM (Perl Package Manager).
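Taken together, these attributes typically appear in a Makefile.PL roughly as follows; every name below is a placeholder:

    use ExtUtils::MakeMaker;
    WriteMakefile(
        NAME          => 'Foo::Bar',
        VERSION_FROM  => 'lib/Foo/Bar.pm',
        ABSTRACT_FROM => 'lib/Foo/Bar.pm',   # first "=head1 NAME" line supplies the abstract
        AUTHOR        => 'A. U. Thor <author@example.com>',
    );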
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=/tmp/ and installation would go into /tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. "/tmp/" not "/tmp".
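For example, a repackager might stage an install like this (the staging path is illustrative; note the trailing slash):

    make install DESTDIR=/tmp/stage/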
Ref to array of subdirectories containing Makefile.PLs e.g. [ 'sdbm' ] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME above but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
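For instance (the script path is a placeholder):

    EXE_FILES => ['bin/myscript'],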
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
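As a sketch, if you install with INSTALLDIRS=perl, man pages could be suppressed from the command line like this:

    perl Makefile.PL INSTALLDIRS=perl INSTALLMAN1DIR=none INSTALLMAN3DIR=none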
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory, where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless of any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perllibrary that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
Boolean which tells MakeMaker, that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
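An illustrative sketch only; the file name is a placeholder and the target value mimics the form MakeMaker would normally compute by default:

    MAN3PODS => {
        'lib/Foo/Bar.pm' => '$(INST_MAN3DIR)/Foo::Bar.$(MAN3EXT)',
    }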
If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl.
If the extension links to a library that it builds set this to the name of the library (see SDBM_File)
Perl module name for this extension (DBD::Oracle). This will default to the directory name but should be explicitly defined in the Makefile.PL.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml module meta-data file during 'make distdir'.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o"
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions, with the exception of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE: Failure to set this flag in any one of the loaded extensions nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp} falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for read/writable files. Defaults to 644. See also "perm_rw" in MM_Unix.
Desired permission for executable files. Defaults to 755. See also "perm_rwx" in MM_Unix.
Ref to hash of files to be processed as perl programs. MakeMaker will default to any found *.PL file (except Makefile.PL) being keys and the basename of the file being the value. E.g.
{'foobar.PL' => 'foobar'}
The *.PL files are expected to produce output to the target files themselves. If multiple files can be generated from the same *.PL file then the value in the hash can be a reference to an array of target file names. E.g.
{'foobar.PL' => ['foobar1','foobar2']}
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIBDIR)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done.
Great care is necessary when defining the command if quoting needs to be done. For instance, you would need to say:
{'PM_FILTER' => 'grep -v \\"^\\#\\"'}
to remove all the leading comments on the fly during the build. The extra \\ are necessary, unfortunately, because this variable is interpolated within the context of a Perl program built on the command line, and double quotes are what is used with the -e switch to build that command line. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = grep -v \"^\#\"
Without the \\ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be affected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die with the proper message.
Note: see Test::Harness for a shortcut for stopping tests early if you are missing dependencies.
Do not use this parameter for simple requirements, which could be resolved at a later time, e.g. after an unsuccessful make test of your module.
It is extremely rare to have to use PREREQ_FATAL at all!
Hashref: Names of modules that need to be available to run this extension (e.g. Fcntl for SDBM_File) are the keys of the hash and the desired version is the value. If the required version number is 0, we only check if any version is installed already.
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = { 'A::B' => Vers1, 'C::D' => Vers2, ... };
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains the regular expression
/([\$*])(([\w\:\']*)\bVERSION)\b.*\=/
will be evaluated with eval() and the value of the named variable after the eval() will be assigned to the VERSION attribute of the MakeMaker object. The following lines will be parsed o.k.:
$VERSION = '1.00';
*VERSION = \'1.01';
$VERSION = sprintf "%d.%03d", q$Revision: 1.133 $ =~ /(\d+)/g;
$FOO::VERSION = '1.10';
*FOO::VERSION = \'1.11';
our $VERSION = 1.2.3; # new for perl5.6.0
but these will fail:
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to an empty string, which is identical to -prototypes, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDENCY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
{TESTS => 't/*.t'}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
MAN3PODS => ' '
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is MAN3PODS => { }.
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture-specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with a .pl file name extension (e.g. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules by which a hints file is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create a META.yml module meta-data file and add this to your MANIFEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO_UNIX), which defaults to a null command under UNIX, and will convert files in the distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile( 'dist' => { COMPRESS=>"bzip2", SUFFIX=>".bz2" })
A problem that has long plagued users of MakeMaker-based modules is getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. To this end a simple module meta-data file has been introduced, META.yml.
META.yml is a YAML document (see http://www.yaml.org) containing basic information about the module (name, version, prerequisites...) in an easy to read format. The format is developed and defined by the Module::Build developers (see http://module-build.sourceforge.net/META-spec.html)
MakeMaker will automatically generate a META.yml file for you and add it to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
ExtUtils::MakeMaker::WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split on whitespace, and the result is processed before any actual command line arguments are processed.
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
ExtUtils::MM_Unix, ExtUtils::Manifest, ExtUtils::Install, ExtUtils::Embed
Andy Dougherty <doughera@lafayette.edu>, Andreas König <andreas.koenig@mind.de>, Tim Bunce <timb@cpan.org>. VMS support by Charles Bailey <bailey@newman.upenn.edu>. OS/2 support by Ilya Zakharevich <ilya@math.ohio-state.edu>.
Currently maintained by Michael G Schwern <schwern@pobox.com>
Send patches and ideas to <makemaker@perl.org>.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see http://www.makemaker.org.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html
|
When converting from MyBB to phpBB, this error message comes up?
Code: Select all
General Error
SQL ERROR [ mysqli ]
Unknown column 'users.yahoo' in 'field list' [1054]
SQL
SELECT users.uid, users.website, users.yahoo, users.aim, users.icq, users.skype, userfields.fid1 FROM mybb_users users LEFT JOIN mybb_userfields AS userfields ON (users.uid = userfields.ufid) ORDER BY users.uid LIMIT 2000
BACKTRACE
FILE: (not given by php)
LINE: (not given by php)
CALL: msg_handler()
FILE: [ROOT]/phpbb/db/driver/driver.php
LINE: 855
CALL: trigger_error()
FILE: [ROOT]/phpbb/db/driver/mysqli.php
LINE: 194
CALL: phpbb\db\driver\driver->sql_error()
FILE: [ROOT]/phpbb/db/driver/mysql_base.php
LINE: 45
CALL: phpbb\db\driver\mysqli->sql_query()
FILE: [ROOT]/phpbb/db/driver/driver.php
LINE: 261
CALL: phpbb\db\driver\mysql_base->_sql_query_limit()
FILE: [ROOT]/install/install_convert.php
LINE: 1263
CALL: phpbb\db\driver\driver->sql_query_limit()
FILE: [ROOT]/install/install_convert.php
LINE: 214
CALL: install_convert->convert_data()
FILE: [ROOT]/install/index.php
LINE: 409
CALL: install_convert->main()
FILE: [ROOT]/install/index.php
LINE: 289
CALL: module->load()
|
Great work so far! Our MinHeap adds elements to the internal list, keeps a running count, and has the beginnings of .heapify_up().
Before we dive into the logic for .heapify_up(), let’s review how heaps track elements. We use a list for storing internal elements, but we’re modeling it on a binary tree, where every “parent” element has up to two “child” elements.
“child” and “parent” elements are determined by their relative indices within the internal list. By doing some arithmetic on an element’s index, we can determine the indices for parent and child elements (if they exist).
Parent: index // 2
Left Child: index * 2
Right Child: (index * 2) + 1
print(heap.heap_list)
# [None, 10, 13, 21, 61, 22, 23, 99]
# Indices: [0, 1, 2, 3, 4, 5, 6, 7]
heap.parent_idx(4)
# (4 // 2) == 2
# Element at index 4 is 61
# Element at index 2 is 13
# The parent element of 61 is 13
heap.left_child(3)
# (3 * 2) == 6
# Element at index 3 is 21
# Element at index 6 is 23
# The left child element of 21 is 23
These calculations are important for the efficiency of the heap, but they’re not necessary to memorize, so we’ve added three helper methods: .parent_idx(), .left_child_idx(), and .right_child_idx().
These helpers take an index as the argument and return the corresponding parent or child index.
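For reference, here is a minimal sketch of what those helpers can look like, assuming the heap keeps its elements in a heap_list that starts with a None placeholder at index 0 (as in the example above); the version in script.py may differ in its details:
class MinHeap:
    def __init__(self):
        self.heap_list = [None]  # index 0 is a placeholder so the parent/child math starts at 1
        self.count = 0
    def parent_idx(self, idx):
        # index of the parent of the element at idx
        return idx // 2
    def left_child_idx(self, idx):
        # index of the left child of the element at idx
        return idx * 2
    def right_child_idx(self, idx):
        # index of the right child of the element at idx
        return idx * 2 + 1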
Instructions
1.
Fill in the code in script.py to test out these new helper methods.
|
Sampling across the dataset, or how to squeeze a few more thousandths out of it
Recovery mode
This article is about images and classification: a small study of properties, one more stroke in the portrait of MNIST (and a hint for solving other similar problems).
There are many publications online about interpreting this or that neural network and about the significance and contribution of particular points to training. There is a mass of work on finding whiskers, tails and other parts and their importance. I won't try to replace the librarians and compile a list here; I'll just describe my own experiment.
It all started with an excellent video,
the talk «Как думают роботы. Интерпретация ML-моделей» ("How robots think. Interpreting ML models"), watched on the advice of a smart person; like any worthwhile thing, it raised a lot of questions. For example: how unique are the key points of a dataset?
Or another question: there are many articles online about how changing a single pixel of an image can substantially distort a network's prediction. (Remember that in this article we only consider classification tasks.) How unique is such a treacherous point? Are there such points in the natural MNIST sequence, and if we find them and throw them out, will the training accuracy of the neural network improve?
The author, following his usual method of getting rid of everything superfluous, decided not to mix things up and chose a simple, reliable and effective way to investigate these questions:
as the experimental task, the example to dissect, take the familiar MNIST ( yann.lecun.com/exdb/mnist ) and its classification.
As the test network, take the classic example network from the KERAS team, recommended for beginners:
github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
The investigation itself is very simple.
Train the KERAS network with a stopping criterion of no accuracy improvement on the test sequence, i.e. train until test_accuracy becomes substantially larger than validation_accuracy and validation_accuracy has not improved for 15 epochs. In other words, the network has stopped learning and overfitting has begun.
From the MNIST dataset, make 324 new datasets by discarding groups of points and train the same network on them under exactly the same conditions with the same initial weights.
Let's get started. I consider it right and proper to publish all the code, from the first line to the last, even if readers have obviously seen it many times.
Load the libraries and load the mnist dataset, if it is not downloaded yet.
Then convert it to 'float32' format and normalize it to the range 0 to 1.
The preparation is done.
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.optimizers import *
from keras.callbacks import EarlyStopping
import numpy as np
import os
num_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= np.max(x_train)
x_test /= np.max(x_test)
XX_test = np.copy(x_test)
XX_train = np.copy(x_train)
YY_test = np.copy(y_test)
YY_train = np.copy(y_train)
print('x_train shape:', XX_train.shape)
print('x_test shape:', XX_test.shape)
Store the model and weight file names, as well as the accuracy and loss metrics of our network, in variables. This is not in the original code, but it is needed for the experiment.
f_model = "./data/mnist_cnn_model.h5"
f_weights = "./data/mnist_cnn_weights.h5"
accu_f = 'accuracy'
loss_f = 'binary_crossentropy'
The network itself is exactly the same as on the site
github.com/keras-team/keras/blob/master/examples/mnist_cnn.py.
Save the network and weights to disk. All our training attempts will start from the same initial weights:
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=[loss_f], optimizer=Adam(lr=1e-4), metrics=[accu_f])
model.summary()
model.save_weights(f_weights)
model.save(f_model)
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
conv2d_1 (Conv2D) (None, 24, 24, 64) 18496
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 9216) 0
_________________________________________________________________
dense (Dense) (None, 128) 1179776
_________________________________________________________________
dropout_1 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
Run training on the original mnist to get a reference point, the baseline performance.
x_test = np.copy(XX_test)
x_train = np.copy(XX_train)
s0 = 0
batch_size = 100
max_accu = 0.
if os.path.isfile(f_model):
model = load_model(f_model)
model.load_weights(f_weights, by_name=False)
step = 0
while True:
fit = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=1,
verbose=0,
validation_data=(x_test, y_test)
)
current_accu = fit.history[accu_f][0]
current_loss = fit.history['loss'][0]
val_accu = fit.history['val_'+accu_f][0]
val_loss = fit.history['val_loss'][0]
print("\x1b[2K","accuracy {0:12.10f} loss {1:12.10f} step {2:5d} val_accu {3:12.10f} val_loss {4:12.10f} ".\
format(current_accu, current_loss, step, val_accu, val_loss), end="\r")
step += 1
if val_accu > max_accu:
s0 = 0
max_accu = val_accu
else:
s0 += 1
if current_accu * 0.995 > val_accu and s0 > 15:
break
else:
print("model not found ")
accuracy 0.9967333078 loss 0.0019656278 step 405 val_accu 0.9916999936 val_loss 0.0054226643
Now the main experiment begins. We take all 60000 labeled images of the original sequence for training and zero out everything in them except a 9x9 square. We get 324 experimental sequences and compare the result of training the network on them with training on the original sequence. We train the same network with the same initial weights.
batch_size = 5000
s0 = 0
max_accu = 0.
for i in range(28 - 9):
for j in range(28 - 9):
print("\ni= ", i, " j= ",j)
x_test = np.copy(XX_test)
x_train = np.copy(XX_train)
x_train[:,:i,:j,:] = 0.
x_test [:,:i,:j,:] = 0.
x_train[:,i+9:,j+9:,:] = 0.
x_test [:,i+9:,j+9:,:] = 0.
if os.path.isfile(f_model):
model = load_model(f_model)
model.load_weights(f_weights, by_name=False)
else:
print("model not found ")
break
step = 0
while True:
fit = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=1,
verbose=0,
validation_data=(x_test, y_test)
)
current_accu = fit.history[accu_f][0]
current_loss = fit.history['loss'][0]
val_accu = fit.history['val_'+accu_f][0]
val_loss = fit.history['val_loss'][0]
print("\x1b[2K","accuracy {0:12.10f} loss {1:12.10f} step {2:5d} val_accu {3:12.10f} val_loss {4:12.10f} ".\
format(current_accu, current_loss, step, val_accu, val_loss), end="\r")
step += 1
if val_accu > max_accu:
s0 = 0
max_accu = val_accu
else:
s0 += 1
if current_accu * 0.995 > val_accu and s0 > 15:
break
There is no point in posting all 324 results here; if anyone is interested, I can send them personally. The computation takes several days, in case anyone wants to repeat it.
It turned out that on a 9x9 crop the network can train worse, which is obvious, but also better, which is not obvious at all.
For example:
i= 0 j= 14
accuracy 0.9972333312 loss 0.0017946947 step 450 val_accu 0.9922000170 val_loss 0.0054322388
i= 18 j= 1
accuracy 0.9973166585 loss 0.0019487827 step 415 val_accu 0.9922000170 val_loss 0.0053000450
We throw away everything from the handwritten digit images except a 9x9 square, and the quality of training and recognition improves!
It is also clear that there is more than one such special region that improves the quality of the network. And not just two; these two are given only as an example.
The outcome of this experiment and preliminary conclusions.
Any natural dataset (and I don't think LeCun deliberately distorted anything) contains not only points that are essential for training, but also points that hinder it. The task of finding these "harmful" points becomes relevant: they exist even if you can't see them.
Stacking and blending can be done not only along the dataset, picking images in groups, but also across it, choosing regions of the images to split on and then proceeding as usual. In this case such an approach improves training quality, and there is hope that in a similar task this "stacking across" will also add some quality. On kaggle.com a few ten-thousandths sometimes (almost always) let you substantially raise your standing and rating.
Thank you for your attention.
|
Specify the keyword args linestyle and/or marker in your call to plot.
For example, using a dashed line and blue circle markers:
plt.plot(range(10), linestyle='--', marker='o', color='b')
A shortcut call for the same thing:
plt.plot(range(10), '--bo')
Here is a list of the possible line and marker styles:
================ ===============================
character description
================ ===============================
- solid line style
-- dashed line style
-. dash-dot line style
: dotted line style
. point marker
, pixel marker
o circle marker
v triangle_down marker
^ triangle_up marker
< triangle_left marker
> triangle_right marker
1 tri_down marker
2 tri_up marker
3 tri_left marker
4 tri_right marker
s square marker
p pentagon marker
* star marker
h hexagon1 marker
H hexagon2 marker
+ plus marker
x x marker
D diamond marker
d thin_diamond marker
| vline marker
_ hline marker
================ ===============================
edit: with an example of marking an arbitrary subset of points, as requested in the comments:
import numpy as np
import matplotlib.pyplot as plt
xs = np.linspace(-np.pi, np.pi, 30)
ys = np.sin(xs)
markers_on = [12, 17, 18, 19]
plt.plot(xs, ys, '-gD', markevery=markers_on)
plt.show()
This last example using the markevery kwarg is possible since 1.4+, due to the merge of this feature branch. If you are stuck on an older version of matplotlib, you can still achieve the result by overlaying a scatterplot on the line plot. See the edit history for more details.
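For completeness, a rough sketch of that scatter-overlay fallback for older matplotlib versions (same data as above; the styling choices are arbitrary):
import numpy as np
import matplotlib.pyplot as plt
xs = np.linspace(-np.pi, np.pi, 30)
ys = np.sin(xs)
markers_on = [12, 17, 18, 19]
plt.plot(xs, ys, '-g')  # plain green line, no markers
plt.scatter(xs[markers_on], ys[markers_on], marker='D', color='g')  # diamonds only at the chosen indices
plt.show()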
|
Problem
Using threading, I spin up two workers that each add from 0 to 50, and a monitoring function watches the running total until it reaches 100%. In the for loop I call join() on each thread, so processing does not move on to the next thread until the current one has finished adding up to 50; in other words everything runs sequentially. Right now only two threads are created with range(2), so I could also create them separately, call start() on each first and attach join() afterwards, but how should I write the code so that it still works when this becomes range(100)?
import time
import threading
import sys
class AddingNumber():
def __init__(self):
self.sum = 0
self.flag = True
def adding(self, number):
for i in range(number):
time.sleep(.1)
self.sum += 1
def progress(self):
while self.flag:
sys.stdout.write(' {percent}%\r'.format(percent=self.sum))
calculation = AddingNumber()
t2 = threading.Thread(target=calculation.progress)
t2.setDaemon(True)
t2.start()
for i in range(2):
t = threading.Thread(target=calculation.adding, args=[50])
t.start()
t.join()
calculation.flag = False
print('{percent}%\r'.format(percent=calculation.sum))
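One common pattern for this kind of problem (a sketch building on the names above, not part of the original post) is to create and start every worker thread first, keep them in a list, and only then join them all; this scales to range(100) without serializing the workers:
threads = []
for i in range(100):
    t = threading.Thread(target=calculation.adding, args=[50])
    t.start()               # start every worker before joining any of them
    threads.append(t)
for t in threads:
    t.join()                # now wait for all workers to finish
calculation.flag = False    # stop the progress-monitoring thread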
|
Description
Use FindAll to search for all objects that have the specified values of the specified properties. The search is performed in the object hierarchy displayed in the Object Browser panel starting from the testedObj object and continuing down the hierarchy to the specified depth. The returned collection may include the testObj object and its child, grandchild or great grandchild objects that match the search conditions.
The FindAll method is similar to FindAllChildren. The difference between them is that FindAllChildren only searches in child objects, while FindAll also searches in the testedObj object.
Declaration
TestObj.FindAll(PropNames, PropValues, Depth, RefreshTree)
TestObj A variable, parameter or expression that specifies a reference to one of the objects listed in the Applies To section
PropNames [in] Required Variant
PropValues [in] Required Variant
Depth [in] Optional Integer Default value: 1
RefreshTree [in] Optional Boolean Default value: True
Result An array of the tested objects.
Applies To
All processes, windows, controls and onscreen objects.
View Mode
To view this method in the Object Browser panel and in other panels and dialogs, activate the Advanced view mode.
Parameters
The method has the following parameters:
PropNames
A property or an array of properties by which the method will search for an object.
You can view the list of object properties and their values in the Object Browser. See Exploring Object Properties and Methods in the Object Browser.
For web applications and hybrid mobile applications, you can also specify names of native web attributes (since TestComplete treats native web attributes as object properties). See Accessing Native Web Attributes and Methods for details.
PropValues
A value of a single property or an array of values of properties that the PropNames parameter specifies.
Values can contain asterisk (*) or question mark (?) wildcards, or regular expressions. The asterisk (*) wildcard corresponds to a string of any length (including an empty string), the question mark corresponds to any single character (including none). To specify more complicated parts of the value, use regular expressions. For information on them, see the Remarks section below.
Values can be case-sensitive or case-insensitive depending on the Use case-sensitive parameters setting of your current project. Regular expression patterns are always case-insensitive.
You can view the list of object properties and their values in the Object Browser. See Exploring Object Properties and Methods in the Object Browser.
Depth
A positive number (greater than 0) that specifies the level of objects where FindAll will search for the desired objects. By default, Depth is 1 and means that the method will search in the testObj object and its child objects. If Depth is 2, FindAll will search in testObj, child and grandchild objects. If Depth is 3, FindAll will search in testObj, child, grandchild and great grandchild objects, and so on. To search in the whole testObj hierarchy, use a Depth value that is greater than the number of child levels in the hierarchy, for example, 20000.
RefreshTree
TestComplete performs the search in the cached copy of the object hierarchy, which may not correspond to the actual hierarchy of objects in the tested application. This may happen, for instance, if the actions that precede the search caused changes in the application state. The RefreshTree parameter lets you specify what TestComplete should do if no objects matching the search criteria were found in the cached object tree. If it is True (default), TestComplete will refresh the cached object tree and perform the search once again. If it is False, TestComplete will not refresh the object tree and will return an empty array indicating that no objects were found.
Result Value
An array of the objects that have the specified values of the specified properties. The returned collection includes the testObj object if it matches the search conditions.
Note for JScript, C#Script and C++Script users: The array returned by the FindAll method is in the safe array format, which is not compatible with standard JScript arrays. To use such an array in JScript, C#Script or C++Script code, you need to convert it to the native format using the toArray method of the variant array (see the examples below).
Remarks
We do not recommend that you use the VisibleOnScreen property in a search condition of the method. It may take much time for TestComplete to get the value of this property, and using it for searching for objects may decrease the test performance significantly.
When the FindAll method is used to search for objects by name (the Name property), TestComplete ignores spaces and the following characters in the name:
( ) [ ] . , " '
This behavior is intended to eliminate differences between the object name syntax in different scripting languages during the search.
In general, it is not recommended to use the Name property with FindAll; consider using other properties instead. For example, Name is a complex value that is composed of other properties, such as WndClass or WndCaption, so you can search by a combination of these individual properties.
Regular expressions should start with "regexp:", for example:
obj = parent.Find("PropName", "regexp:gr[ae]y", 5)
Regular expression patterns use the standard TestComplete syntax, but have the following specifics:
All patterns are case-insensitive. For example, "regexp:gr[ae]y" will match both "gray" and "GRAY".
Patterns search for partial matches. For example, regexp:notepad matches both "notepad" and "notepad++". To search for an exact match, use the ^ and $ anchors, for example "regexp:^notepad$".
Native regular expressions of the scripting languages are not supported.
Example
The following example searches for all buttons of the Edit class in Notepad’s Font dialog (Notepad must be running). The example uses a single search condition: WndClass equals “Edit”. The single values are passed to the PropNames and PropValues parameters:
JavaScript
function FindEditbuttons()
{
var p, w, textBoxes;
// Obtain the Notepad process
p = Sys.Process("notepad");
// Open the Font dialog
p.Window("Notepad", "*").MainMenu.Click("Format|Font...");
w = p.Window("#32770", "Font");
// Search for all edit buttons in the Font dialog
textBoxes = w.FindAll("WndClass", "Edit", 5);
// Log the search results
if (textBoxes.length > 0)
{
for (let i = 0; i < textBoxes.length; i++)
Log.Message("FullName: " + textBoxes[i].FullName + "\r\n" +
"Text: " + textBoxes[i].wText);
Log.Message("Total number of found edit buttons: " + textBoxes.length);
}
else
Log.Warning("No edit buttons found.");
}
JScript
function FindEditbuttons()
{
var p, w, textBoxes, i;
// Obtain the Notepad process
p = Sys.Process("notepad");
// Open the Font dialog
p.Window("Notepad", "*").MainMenu.Click("Format|Font...");
w = p.Window("#32770", "Font");
// Search for all edit buttons in the Font dialog
textBoxes = w.FindAll("WndClass", "Edit", 5).toArray();
// Log the search results
if (textBoxes.length > 0)
{
for (i = 0; i < textBoxes.length; i++)
Log.Message("FullName: " + textBoxes[i].FullName + "\r\n" +
"Text: " + textBoxes[i].wText);
Log.Message("Total number of found edit buttons: " + textBoxes.length);
}
else
Log.Warning("No edit buttons found.");
}
Python
def FindEditbuttons():
# Obtain the Notepad process
p = Sys.Process("notepad")
# Open the Font dialog
p.Window("Notepad", "*").MainMenu.Click("Format|Font...")
w = p.Window("#32770", "Font")
# Search for all edit buttons in the Font dialog
textBoxes = w.FindAll("WndClass", "Edit", 5)
# Log the search results
if (len(textBoxes) > 0):
for i in range (0, len(textBoxes)):
Log.Message("FullName: " + textBoxes[i].FullName + "\r\n" +\
"Text: " + textBoxes[i].wText)
Log.Message("Total number of found edit buttons: " + VarToStr(len(textBoxes)))
else:
Log.Warning("No edit buttons found.")
VBScript
Sub FindEditbuttons
Dim p, w, textBoxes, i
' Obtain the Notepad process
Set p = Sys.Process("notepad")
' Open the Font dialog
p.Window("Notepad", "*").MainMenu.Click("Format|Font...")
Set w = p.Window("#32770", "Font")
' Find all edit buttons in the Font dialog
textBoxes = w.FindAll("WndClass", "Edit", 5)
' Log the search results
If UBound(textBoxes) >= 0 Then
For i = 0 To UBound(textBoxes)
Log.Message("FullName: " & textBoxes(i).FullName & vbNewLine & _
"Text: " & textBoxes(i).wText)
Next
Log.Message("Total number of found edit buttons: " & (UBound(textBoxes) + 1))
Else
Log.Warning("No edit buttons found.")
End If
End Sub
DelphiScript
procedure FindEditbuttons;
var p, w, textBoxes, i;
begin
// Obtain the Notepad process
p := Sys.Process('notepad');
// Open the Font dialog
p.Window('Notepad', '*').MainMenu.Click('Format|Font...');
w := p.Window('#32770', 'Font');
// Find all edit buttons in the Font dialog
textBoxes := w.FindAll('WndClass', 'Edit', 5);
// Log the search results
if VarArrayHighBound(textBoxes, 1) >= 0 then
begin
for i := 0 to VarArrayHighBound(textBoxes, 1) do
Log.Message('FullName: ' + textBoxes[i].FullName + #13#10 +
'Text: ' + textBoxes[i].wText);
Log.Message('Total number of found edit buttons: ' + aqConvert.VarToStr(VarArrayHighBound(textBoxes, 1) + 1));
end
else
Log.Warning('No edit buttons found.');
end;
C++Script, C#Script
function FindEditbuttons()
{
var p, w, textBoxes, i;
// Obtain the Notepad process
p = Sys["Process"]("notepad");
// Open the Font dialog
p["Window"]("Notepad", "*")["MainMenu"]["Click"]("Format|Font...");
w = p["Window"]("#32770", "Font");
// Obtain all edit buttons in the Font dialog
textBoxes = w["FindAll"]("WndClass", "Edit", 5)["toArray"]();
// Log the search results
if (textBoxes["length"] > 0)
{
for (i = 0; i < textBoxes["length"]; i++)
Log["Message"]("FullName: " + textBoxes[i]["FullName"] + "\r\n" +
"Text: " + textBoxes[i]["wText"]);
Log["Message"]("Total number of found edit buttons: " + textBoxes["length"]);
}
else
Log["Warning"]("No edit buttons found.");
}
Below is a more complex example. It searches for all enabled buttons in Notepad’s Replace dialog (Notepad must be running). This example uses a multiple search condition: WndClass equals “Button” and Enabled equals True. The PropNames and PropValues parameters of the FindAll method receive variant arrays containing the sought-for property names and values.
JavaScript
function FindEnabledButtons()
{
var p, w, PropArray, ValuesArray, buttons;
// Obtain the Notepad process
p = Sys.Process("notepad");
// Open the Replace dialog
p.Window("Notepad", "*").MainMenu.Click("Edit|Replace...");
w = p.Window("#32770", "Replace");
// Specify the sought-for property names
PropArray = new Array ("WndClass", "Enabled");
// Specify the sought-for property values
ValuesArray = new Array ("Button", true);
// Find all enabled buttons in the Replace dialog
buttons = w.FindAll(PropArray, ValuesArray, 5);
// Log the search results
if (buttons.length > 0)
{
for (let i = 0; i < buttons.length; i++)
Log.Message(buttons[i].FullName);
Log.Message("Total number of found enabled buttons: " + buttons.length)
}
else
Log.Warning("No enabled buttons were found.");
}
JScript
function FindEnabledButtons()
{
var p, w, PropArray, ValuesArray, buttons, i;
// Obtain the Notepad process
p = Sys.Process("notepad");
// Open the Replace dialog
p.Window("Notepad", "*").MainMenu.Click("Edit|Replace...");
w = p.Window("#32770", "Replace");
// Specify the sought-for property names
PropArray = new Array ("WndClass", "Enabled");
// Specify the sought-for property values
ValuesArray = new Array ("Button", true);
// Find all enabled buttons in the Replace dialog
buttons = w.FindAll(PropArray, ValuesArray, 5).toArray();
// Log the search results
if (buttons.length > 0)
{
for (i = 0; i < buttons.length; i++)
Log.Message(buttons[i].FullName);
Log.Message("Total number of found enabled buttons: " + buttons.length)
}
else
Log.Warning("No enabled buttons were found.");
}
Python
def FindEnabledButtons():
# Obtain the Notepad process
p = Sys.Process("notepad")
# Open the Replace dialog
p.Window("Notepad", "*").MainMenu.Click("Edit|Replace...")
w = p.Window("#32770", "Replace")
# Specify the sought-for property names
PropArray = ["WndClass", "Enabled"]
# Specify the sought-for property values
ValuesArray = ["Button", True]
# Find all enabled buttons in the Replace dialog
buttons = w.FindAll(PropArray, ValuesArray, 5)
# Log the search results
if (len(buttons) > 0):
for i in range (0, len(buttons)):
Log.Message(buttons[i].FullName)
Log.Message("Total number of found enabled buttons: " + VarToStr(len(buttons)))
else:
Log.Warning("No enabled buttons were found.")
VBScript
Sub FindEnabledButtons
Dim p, w, buttons, i, PropArray, ValuesArray
' Obtain the Notepad process
Set p = Sys.Process("notepad")
' Open the Replace dialog
p.Window("Notepad", "*").MainMenu.Click("Edit|Replace...")
Set w = p.Window("#32770", "Replace")
' Specify the sought-for property names
PropArray = Array("WndClass", "Enabled")
' Specify the sought-for property values
ValuesArray = Array("Button", True)
' Find all enabled buttons in the Replace dialog
buttons = w.FindAll(PropArray, ValuesArray, 5)
' Log the search results
If UBound(buttons) >= 0 Then
For i = 0 To UBound(buttons)
Log.Message(buttons(i).FullName)
Next
Log.Message("Total number of found enabled buttons: " & (UBound(buttons) + 1))
Else
Log.Warning("No enabled buttons were found.")
End If
End Sub
DelphiScript
procedure FindEnabledButtons;
var p, w, PropArray, ValuesArray, buttons, i;
begin
// Obtain the Notepad process
p := Sys.Process('notepad');
// Open the Replace dialog
p.Window('Notepad', '*').MainMenu.Click('Edit|Replace...');
w := p.Window('#32770', 'Replace');
// Specify the sought-for property names
PropArray := ['WndClass', 'Enabled'];
// Specify the sought-for property values
ValuesArray := ['Button', true];
// Find all enabled buttons in the Replace dialog
buttons := w.FindAll(PropArray, ValuesArray, 5);
// Log the search results
if VarArrayHighBound(buttons, 1) >= 0 then
begin
for i := 0 to VarArrayHighBound(buttons, 1) do
Log.Message(buttons[i].FullName);
Log.Message('Total number of found enabled buttons: ' + aqConvert.VarToStr(VarArrayHighBound(buttons, 1) + 1))
end
else
Log.Warning('No enabled buttons were found.');
end;
C++Script, C#Script
function FindEnabledButtons()
{
var p, w, PropArray, ValuesArray, buttons, i;
// Obtain the Notepad process
p = Sys["Process"]("notepad");
// Open the Replace dialog
p["Window"]("Notepad", "*")["MainMenu"]["Click"]("Edit|Replace...");
w = p["Window"]("#32770", "Replace");
// Specify the sought-for property names
PropArray = new Array ("WndClass", "Enabled");
// Specify the sought-for property values
ValuesArray = new Array ("Button", true);
// Find all enabled buttons in the Replace dialog
buttons = w["FindAll"](PropArray, ValuesArray, 5)["toArray"]();
// Log the search results
if (buttons["length"] > 0)
{
for (i = 0; i < buttons["length"]; i++)
Log["Message"](buttons[i]["FullName"]);
Log["Message"]("Total number of found enabled buttons: " + buttons["length"])
}
else
Log["Warning"]("No enabled buttons were found.");
}
|
The overmind system has been live for three months. It has executed 800+ tasks in total and automatically reviewed and executed more than 5000 SQL statements. The efficiency gain is quite noticeable, and we are one step closer to the goal of "one cup of coffee, easy ops".
When I first wrote overmind I already had future extensions in mind: not just a platform for automatic SQL review and execution, but a professional system for automated database operations, with automatic SQL review and execution developed as the first feature. After three months of use, overmind has been accepted by everyone and has genuinely saved us time, which has given me, a non-professional developer and half-baked DBA, a great deal of encouragement and confidence.
In daily work we often receive requests to copy a whole database or a single table from production to a test environment, or from test A to test B, and other database-to-database or table-to-table transfers. These operations require little technical skill, yet they are time-consuming, laborious and error-prone, which makes them ideal candidates for automation. This is the second feature overmind implements: work orders plus automated data migration.
Why do we need work orders? The current process is all done by email: a request email goes to the DBA, and the DBA performs the data transfer. In theory an automated process should need no human involvement from start to finish, but because data security is involved the DBA still needs to confirm, so a work order was added. Work orders also bring self-service status tracking and lower communication costs, and later on they make it easy to gather metrics such as the number of orders so we can optimize the service and the process. To make sure work orders are handled promptly, every step sends email and IM notifications to give users the most timely feedback.
The work-order flow for data migration is simple: the user submits an order and the DBA reviews it; if the review passes, the system automatically performs the migration; if not, the flow ends. The flow chart is as follows:
The flow does not include multi-level review by project leaders and the like, mainly because
Database migration mainly relies on MySQL's export/import capabilities; the core command is just one line:
mysqldump -h 10.82.9.19 -P 3306 -uops -pcoffee --default-character-set=utf8 --single-transaction --databases dbname | mysql -h 192.168.106.91 -P 3306 -uops -pcoffee --default-character-set=utf8 dbname
The command above is a shell command. I could not find a Python package that directly exports and imports MySQL data, so the shell command has to be invoked from Python code. The subprocess module is recommended: it offers richer usage and makes it easy to get the final exit status and output of the command. Converted into a complete Python class, it looks like this:
from subprocess import Popen, PIPE
class Cmd():
def __init__(self):
self.src_host = '10.82.9.19'
self.src_port = 3306
self.src_database = 'dbname'
self.des_host = '192.168.106.91'
self.des_port = 3306
self.des_database = 'dbname'
self.tables = 'all'
self.username = 'ops'
self.password = 'coffee'
def migration(self):
# Export with the mysqldump command
dump = "mysqldump -h %s -P %d -u%s -p%s --default-character-set=utf8 --single-transaction --databases %s" % (
self.src_host, self.src_port, self.username, self.password, self.src_database
)
# If exporting specific tables, append the table names as a string 'table1 table2 table3'
if self.tables != 'all':
dump += ' --tables %s' % self.tables
# Import with the mysql command
mysql = "mysql -h %s -P %d -u%s -p%s --default-character-set=utf8 %s" % (
self.des_host, self.des_port, self.username, self.password, self.des_database
)
# Run the export | import shell pipeline
process = Popen("%s | %s" % (dump, mysql), stderr=PIPE, shell=True)
process_stdout = process.communicate()
# Check the exit status of the shell command
if (process.returncode == 0):
print('Migration succeeded!')
else:
print(process_stdout[1].decode('utf8').strip())
Cmd().migration()
A shell pipe is used here: the | symbol separates two commands, and the standard output of the command before the pipe becomes the input of the command after it. The advantage is that no separate SQL file needs to be written to disk, so there is no need to worry about deleting files or disk usage; the drawback is that exporting a very large database may cause an OOM, so weigh this against your own situation.
Copying data is a time-consuming operation and should be executed asynchronously in a web application, so Celery is used here. The article "Django配置Celery执行异步任务和定时任务" (configuring Celery in Django for asynchronous and scheduled tasks) describes the use of Celery in Django in detail.
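As an illustration only (the module layout and task name below are assumptions, not the project's actual code), the migration above can be wrapped in a Celery task roughly like this:
# tasks.py -- a rough sketch; assumes a Celery app is already configured for the Django project
from celery import shared_task
from migration import Cmd   # the Cmd class shown above; the import path is an assumption

@shared_task
def run_migration():
    # run the mysqldump | mysql pipeline in a worker process instead of the web request
    Cmd().migration()

# the view that approves a work order would then trigger it asynchronously:
# run_migration.delay()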
Work order list page: ordinary users only see the orders they submitted themselves, the order status is visible at a glance, and there is a handy search function.
Submit order page: overmind maintains a list of databases used by every feature in the system, and this page is no exception.
Order review page: the review page and the detail page are actually the same page; different elements are shown depending on the order status.
Order detail page: all information about the order is recorded here in detail, the complete status of the whole submit, review and execute process.
|
Let’s explain it with a Python example.
You can run the scripts below; please remember to change the IP address before you run, and please note the xArm will move forward 100 mm on the X axis.
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), '../../..'))
from xarm.wrapper import XArmAPI
arm = XArmAPI('192.168.1.100')
arm.motion_enable(enable=True)
arm.set_mode(0)
arm.set_state(state=0)
arm.reset(wait=True)
#move to point A, get IK of PointC at Point A
arm.set_position(207,0,112,180,0,0,wait=True)
IK_pointA_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
#move to point B, get IK of PointC at Point B
arm.set_position(257,0,112,180,0,0,wait=True)
IK_pointB_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
# #move to point C, get IK of PointC at Point C
arm.set_position(307,0,112,180,0,0,wait=True)
IK_pointC_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
arm.reset(wait=True)
print("IK_pointA_C_joints",IK_pointA_C[1])
arm.set_servo_angle(angle=IK_pointA_C[1],wait=True)
print("real_pointA_C_position",arm.get_position())
print("IK_pointB_C_joints",IK_pointB_C[1])
arm.set_servo_angle(angle=IK_pointB_C[1],wait=True)
print("real_pointB_C_position",arm.get_position())
print("IK_pointC_C_joints",IK_pointC_C[1])
arm.set_servo_angle(angle=IK_pointC_C[1],wait=True)
print("real_pointC_C_position",arm.get_position())
Here is an image showing the relationship of points A/B/C.
It is supposed to be:
IK_pointA_C_joints=IK_pointB_C_joints=IK_pointC_C_joints
real_pointA_C_position=real_pointB_C_position=real_pointC_C_position
But indeed it’s not.
The IK does not converge if the robot is at point A and you are trying to get the IK of point C, which is 100 mm away from point A.
If you need to get the correct IK of point C, the position where you request the IK of point C should be very close to point C, for instance within 10 mm of point C.
Here is an example, the point D is 5mm from the point C.
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), '../../..'))
from xarm.wrapper import XArmAPI
arm = XArmAPI('192.168.1.100')
arm.motion_enable(enable=True)
arm.set_mode(0)
arm.set_state(state=0)
arm.reset(wait=True)
#move to point A, get IK of PointC at Point A
arm.set_position(207,0,112,180,0,0,wait=True)
IK_pointA_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
#move to point B, get IK of PointC at Point B
arm.set_position(257,0,112,180,0,0,wait=True)
IK_pointB_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
# #move to point C, get IK of PointC at Point C
arm.set_position(307,0,112,180,0,0,wait=True)
IK_pointC_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
# #move to point D, get IK of PointC at Point D
arm.set_position(302,0,112,180,0,0,wait=True)
IK_pointD_C=arm.get_inverse_kinematics([307,0,112,180,0,0])
arm.reset(wait=True)
print("IK_pointA_C_joints",IK_pointA_C[1])
arm.set_servo_angle(angle=IK_pointA_C[1],wait=True)
print("real_pointA_C_position",arm.get_position())
print("IK_pointB_C_joints",IK_pointB_C[1])
arm.set_servo_angle(angle=IK_pointB_C[1],wait=True)
print("real_pointB_C_position",arm.get_position())
print("IK_pointC_C_joints",IK_pointC_C[1])
arm.set_servo_angle(angle=IK_pointC_C[1],wait=True)
print("real_pointC_C_position",arm.get_position())
print("IK_pointD_C_joints",IK_pointD_C[1])
arm.set_servo_angle(angle=IK_pointD_C[1],wait=True)
print("real_pointD_C_position",arm.get_position())
Now we get:
IK_pointC_C_joints=IK_pointD_C_joints
real_pointC_C_position=real_pointD_C_position
|
[style] Choose between multiple inheritance
Dear Sage people,
I want to create a new (mathematical) object that sometimes is an Expression and sometimes a SymbolicFunction, depending on the arguments. You can think of this for example like $f(a, b, t) = \int_0^t a^b e^{-x^2} dx$. For special values of $t$ I would like to see it as an Expression ($t=0$ or $t=\infty$), but in all other cases I want it to be a BuiltinFunction (or something alike).
In Sage I can do something like:
class MyObjectExpression(Expression):
def __init__(self, a, b, t):
Expression.__init__(self, integral(a**b*e**(-x**2), x, 0, t))
# More (override) stuff below
class MyObjectFunction(BuiltinFunction):
def __init__(self, a, b, t):
BuiltinFunction.__init__(self, 'f(a,b,t)', nargs=1)
# More (override) stuff below
def MyObject(a, b, t):
if t == 0 or t == infty:
return MyObjectExpression(a, b, t)
else:
return MyObjectFunction(a, b, t)
Is it possible to combine these three things into one class? So I want to create a class which is sometimes an Expression and sometimes a much more abstract class; is this possible?
Best, Noud
Edit: What I actually want to do is program Askey-Wilson polynomials and give them extra options, like a three-term recurrence relation. But this depends on $n$. I already programmed this.
class Askey_Wilson(SageObject):
def __init__(self, SR, n, z, a, b, c, d, q):
self.n = n
self.z = z
self.q = q
self.a = a
self.b = b
self.c = c
self.d = d
self.param = [a, b, c, d]
if self.n in ZZ:
self.I = self.evaluate()
else:
self.I = var('askey_wilson')
def __repr__(self):
return 'p_%i(%s;%s,%s,%s,%s|%s)' % (
self.n, self.z, self.a, self.b, self.c, self.d, self.q
)
def evaluate(self):
n, q, z, a, b, c, d = [self.n, self.q, self.z] + self.param
lc = qPochhammerSymbol(SR, [a*b, a*c, a*d], q, n) / a**n
poly = BasicHypergeometricSeries(SR,
[q**(-n), a*b*c*d*q**(n-1), a*z, a*z**(-1)],
[a*b, a*c, a*d], q, q)
return lc*poly
def three_term_recurrence(self):
A, B, C = 0, 0, 0
# compute three term recurrence relation
return A, B, C
But now every time I want to know the explicit value of the Askey-Wilson polynomials I have to call askey_wilson.I. I want to get rid of the I.
|
See more at Fast Company.
Mariya Pylayev produces videos for Nexus Media.
When you paste this story into your backend, you will find a snippet of javascript at the bottom that looks like the code below. This is the tracking pixel. It is a commonly used tool that will allow us to measure the reach of our work. If you prefer to copy the tracking pixel separately, here it is:
<script type="application/javascript">
window.addEventListener('DOMContentLoaded', (event) => {
var img = document.createElement('img');
var src = 'https://www.google-analytics.com/collect?v=1';
src += '&tid=UA-172916447-1';
src += '&cid=1';
src += '&t=pageview';
src += '&dl=' + encodeURIComponent('https://nexusmedianews.com/explaining-coral-bleaching-the-latest-disaster-destroying-our-oceans-968fad1df1d/');
src += '&dt=' + encodeURIComponent('Explaining Coral Bleaching, The Latest Disaster Destroying Our Oceans');
src += '&dr=' + encodeURIComponent(window.location.href);
img.src = src;
img.width = 1;
img.height = 1;
img.setAttribute('style', 'display: none;');
img.setAttribute('aria', 'hidden');
img.setAttribute('role', 'presentation');
img.alt = '';
document.body.appendChild(img);
});
</script>
|
Finally, the Computational Mechanics Engineer certification exam is over. 12/15. There are solid, vibration and fluid tracks; this year I took the vibration one. Next year, solid mechanics.
With next year in mind, I made files for the solid mechanics exam.
While making the solid mechanics files, I renamed and cropped the image files.
They became easy to read on a phone.
This year I spent time on a correspondence course to satisfy the software-experience requirement for sitting the exam, so my exam study suffered. I'm not confident whether I passed.
Cutting the book apart
The problem book is thin, so even a paper cutter can cut it.
Any cutting method is fine as long as the body text doesn't get cut off.
For a thick book you could shave the spine off with a hand plane. Assuming you have a plane, that is.
A plane is surprisingly cheap. The cutter on the right: I have a feeling it cost around 10000 yen when I bought it around 2011, though I may be misremembering. It only cuts a few dozen sheets at a time, so it's heavy work.
Considering the processing that comes later, it might have been better to trim off the margins at this point.
Scanning
Any scanner will do.
There are probably differences in scanning speed and so on, but I don't really know.
Avoid the kind where you feed the paper in by hand one sheet at a time.
The kind that feeds the paper in automatically is better.
Choose the resolution by weighing file size against readability.
There are three kinds.
The sheet-flipping type.
The type that feeds the pages in.
The type that photographs the pages while you leaf through the book as it is.
The type you sweep over the page by hand.
Cropping the images
Cropping several hundred problems by hand.
It would be nice to recognize the "Problem 1-x" headings and crop automatically, but I don't know any such technique. Can't do it.
When cutting the book, it might have been better to trim off the margins.
→ I'm posting what I made as a sample. The text itself has no meaning.
The problem book looked like it was made in Word, so I recreated it in Word. I think the reproduction is pretty close.
File names: problem then answer, in that order
In the paper problem book, the problem part and the answer part are separated, so flipping back and forth every time was a pain.
Reorder into problem → answer → problem → answer ... sequence.
It's convenient to save problems under odd numbers and answers under even numbers.
Unless the files are named in the right order, the problem set is hard to read.
A renaming macro
I wrote a macro that renames files.
The macro below doesn't take even/odd into account. Without a small rewrite it can't save with the odd/even numbering.
os.rename(j, dirnm + r"/" + "%05.f"%i + "%s"%(j)[-4:])
If you change the i part to 2i or 2i+1 or so, it should work out; a common trick (see the sketch after the macro below).
When used, every file name in the folder is changed to a sequence number. A scary macro. There is no way to undo it.
When you run it, a folder selection dialog appears.
All files in the selected folder are sorted by file name and then renamed to 5-digit zero-padded sequence numbers.
# coding: utf-8
import Tkinter
import tkFileDialog
import glob
import os
dirnm = tkFileDialog.askdirectory(title="Caution! carefully use.")
fnames = glob.glob(dirnm + r"/" + "*")
fnames = sorted(fnames)
lfname = len(fnames)
for i,j in enumerate(fnames):
print("%05.f"%i + "%s"%(j)[-4:])
os.rename(j, dirnm + r"/" + "%05.f"%i + "%s"%(j)[-4:])
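As a rough sketch of the odd/even idea mentioned above (my own guess, not the author's code), assuming problems and answers are scanned into two separate folders, each already sorted in question order:
# run once on the problems folder (2*i + 1 -> 00001, 00003, ...) and once on the answers
# folder with 2*i + 2 instead (-> 00002, 00004, ...), so the interleaved order comes out right
for i, j in enumerate(fnames):
    new_no = 2 * i + 1
    os.rename(j, dirnm + r"/" + "%05.f" % new_no + "%s" % (j)[-4:])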
Tall pages are hard to read
I want to read the finished image files with my phone held sideways.
So the image files should be landscape. I want to make them landscape and see the text as large as possible.
An aspect ratio of 16:9, which exactly fits the phone, is best.
And yet, depending on the problem, some images end up tall and narrow.
Especially the answers.
For those cases I wrote a Python macro that crops the images.
When used, every jpg file in the folder is converted to 16:9 and split. A scary macro. That said, it doesn't delete the original files, so maybe it isn't that scary.
# coding:utf-8
from PIL import Image
import os, glob
import tkFileDialog
fld = tkFileDialog.askdirectory()
list = glob.glob(fld + "/" + ur"*.png") + glob.glob(fld + "/" + ur"*.jpg")
print list
for i in list:
im = Image.open('%s'%(i))
im_width, im_height = im.size
im_new_height = im_width / 16 * 9  # target height in pixels; we want a pixel count, so an int is more convenient
last_height = im_height - im_new_height
if im_new_height*1.2 < im_height:  # if the image is more than 1.2x taller than the target...
div_num = im_height / im_new_height
sft = (im_height - last_height) / div_num
if sft<im_new_height*1.7:
div_num += 1
else:
div_num += 2
shift = last_height / (div_num-1)
for i2 in range(div_num):
xl, yu, xr, yl = 0, i2*shift, im_width, im_new_height+i2*shift
im_crop = im.crop((xl, yu, xr, yl))
print i
print i2
im_crop.save('%s_%02.f%s'%(i[:-4], i2, i[-4:]))
print('%s_%02.f%s'%(i[:-4], i2, i[-4:]))
#print("%s, %s, %s, %s"%(xl, yu, xr, yl))
im_crop.close()
else:
print('%s'%(i))
im.close()
For the problem images (already landscape), nothing is produced.
For the answer images (the tall ones), split files are produced. The original image is left as is.
The file names should come out as 00001_00, 00001_01, 00001_02, 00001_03, and so on.
Something like this.
I plan to work through about 20 files (10 problems) per day.
|
At a glance: Use URIs to get aggregate reports in CSV files.
Are you looking for Pull API raw data?
Pull API aggregate data characteristics
Reports return as CSV files.
Data freshness rates are the same as the equivalent report on the Export Data page.
Filter by options available: Media source and date range.
Additional capabilities in Pull API are:
Ability to filter by attribution touch type
Selectable timezone
Ability to filter by
Pull API is suited to use by team members and BI developers;
Category                  UA   Retargeting*   Protect360
Partners (media source)   ✓        ✓              ✓
Partners by date          ✓        ✓              ✓
Daily                     ✓        ✓              ✓
Geo                       ✓        ✓              ✓
Geo by date               ✓        ✓              ✓
* For retargeting reports, add the
Related reading:
Terminology
Term Description
Pull API
Solution for downloading CSV reports using URIs.
API call or call
Sending the URI to AppsFlyer by pasting it in the browser address bar or by using scripts.
URI
Guide for team members
About URI templates
URI templates available in the dashboard are populated with the app ID and report type.
They have placeholders for the API V1.0 token and the from/to dates, which you need to edit.
The portion of the URI to the right of the question mark (?) contains parameters. Each parameter begins with an ampersand (&), for example: &media_source=facebook
To get a better understanding of Pull API, complete the tutorial that follows.
Getting your first Pull API report tutorial
Before you begin:
Ask the admin to provide you with the V1.0 token.
To download a report from the dashboard:
Go to Integration > API access. The API access page opens.
Select a report type. For example, Performance reports > Partners daily report. The URI template displays.
Copy the URI by clicking on it.
Open a new tab in your browser, paste the URI.
Edit the URI:
Replace the token placeholder with the Pull API token provided by the admin.
Example: Replace the token placeholder so that the URI contains &api_token=12345678-1234-1234-1234-123456789012. Note! There are no spaces or other punctuation.
Replace the from/to placeholders with dates.
Example: &from=2020-01-20&to=2020-01-31. Note! There are no spaces. Don't delete the &.
Click <Enter> to send the API call.
The report downloads.
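Putting the steps together, the edited URI ends up looking roughly like this (using the Partners daily report template with the placeholder token and dates from the steps above; your app ID and report type will differ):
https://hq.appsflyer.com/export/<app_id>/partners_report/v5?api_token=12345678-1234-1234-1234-123456789012&from=2020-01-20&to=2020-01-31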
Additional parameters can be set to customize reports, for example, to select a specific media source, to return retargeting data, etc. The section that follows contains the list of parameters available.
Aggregate data Pull API parameters
Aggregate report URI and parameters
Parameter Description
api_token V1.0 API token. In example calls, this is shown as: <API TOKEN HERE>.
from
to End date. As for from
Parameter Description
media_source
Use to limit (filter) to a specific media source.
attribution_touch_type
Set this parameter as shown in the example to get view-through attribution (VTA) KPIs.
Example:
currency
Currency of revenue and cost
Aggregate Pull API reports always use the app-specific currency.
reattr
Get retargeting conversions data.
timezone
[Default] Data returns using UTC.
Notes about selecting timezones
Google Ads filtered report
https://hq.appsflyer.com/export/com.greatapp/partners_report/v5?api_token=xxxx
&from=2018-04-09&to=2018-05-09&media_source=googleadwords_int
Facebook filtered report
https://hq.appsflyer.com/export/com.greatapp/partners_report/v5?api_token=xxxx
&from=2018-04-09&to=2018-05-09&media_source=facebook
parameter Description
URI
pid
To filter the report by a specific media source use the
timezone
Selects the timezone used to return data.
If
Templates including the
Example:
KPIs
Protect360 parameters are the same in Pull API and Master API.
View-through attribution (VTA) KPIs
To get the VTA KPIs, add the parameter attribution_touch_type=impression to the Pull API aggregate report URI as detailed in the example.
You can use the parameter with any of the aggregate reports available. Just copy the URI from the user interface, and append the parameter.
You can also add the &media_source parameter to limit the report to a specific media source as depicted in the example that follows.
Some VTA KPIs, like clicks, impressions, and cost APIs, don't have values associated with them and display the value N/A instead.
Example Example URI
VTA only https://hq.appsflyer.com/export/{app_id}/partners_report/v5?api_token={API token}&from=yyyy-mm-dd&to=yyyy-mm-dd
VTA and media source
https://hq.appsflyer.com/export/{app_id}/partners_report/v5?api_token={API token}&from=yyyy-mm-dd&to=yyyy-mm-dd
Pull API for developers
Principles of implementation
Prerequisite:
Familiarize yourself with the Pull API guide for team members.
Consider:
For each report type available, there is a template URI in the dashboard. Go to Integration>API access.
You modify the template to get the data you need. For example, by setting date ranges and filter by parameters.
The parameters for raw data and aggregate data reports differ and are detailed in the report sections.
Path
Path parameters
HTTP method
Parameter Description
Example URI
api_token
Other parameters
Parameters differ depending
Example
URI call example includes additional parameters:
https://hq.appsflyer.com/export/example.app.com/installs_report/v5?
api_token={Account owner API key should be used}&from=yyyy-mm-dd
&to=yyyy-mm-dd&additional_fields=keyword_id,store_reinstall,
deeplink_url,oaid,install_app_store,contributor1_match_type,
contributor2_match_type,contributor3_match_type,match_type
Example scripts
Integrate Pull API into scripts to retrieve data.
As needed, edit the scripts in terms of report type, date range, and filters.
These examples use the installs report.
import okhttp3.*;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.concurrent.TimeUnit;
public class PullApi {
public static void main(String[] args){
String appID = "<APP_ID>";
String reportType = "<REPORT_TYPE>";
String apiToken = "<API_TOKEN>";
String from = "<FROM_DATE>";
String to = "<TO_DATE>";
String requestUrl = "https://hq.appsflyer.com/export/" + appID + "/" + reportType + "/v5?api_token=" + apiToken + "&from=" + from + "&to=" + to;
OkHttpClient client = new OkHttpClient.Builder()
.connectTimeout(30, TimeUnit.SECONDS)
.readTimeout(30, TimeUnit.SECONDS)
.build();
Request request = new Request.Builder()
.url(requestUrl)
.addHeader("Accept", "text/csv")
.build();
try {
Response response = client.newCall(request).execute();
if(response.code() != 200) {
if(response.code() == 404) {
System.out.println("There is a problem with the request URL. Please make sure it is correct");
}
else {
assert response.body() != null;
System.out.println("There was a problem retrieving the data: " + response.body().string());
}
} else {
assert response.body() != null;
String data = response.body().string();
BufferedWriter writer;
writer = new BufferedWriter(new FileWriter(appID + "-" + reportType + "-" + from + "-to-" + to + ".csv"));
writer.write("");
writer.write(data);
writer.close();
}
System.exit(0);
} catch (Exception e) {
e.printStackTrace();
System.exit(1);
}
}
}
const request = require('request');
const fs = require('fs');
const appID = '<APP_ID>';
const reportType = '<REPORT_TYPE>';
const apiToken = '<API_TOKEN>';
const from = '<FROM_DATE>';
const to = '<TO_DATE>';
const requestUrl = `https://hq.appsflyer.com/export/${appID}/${reportType}/v5?api_token=${apiToken}&from=${from}&to=${to}`;
request(requestUrl, (error, response, body) => {
if (error) {
console.log('There was a problem retrieving data:', error);
}
else if (response.statusCode != 200) {
if (response.statusCode === 404) {
console.log('There is a problem with the request URL. Make sure that it is correct');
} else {
console.log('There was a problem retrieving data:', response.body);
}
} else {
fs.writeFile(`${appID}-${reportType}-${from}-to-${to}.csv`, response.body, (err) => {
if (err) {
console.log('There was a problem writing to file: ', err);
} else {
console.log('File was saved');
}
});
}
});
import requests
app_id = '<APP_ID>'
report_type = '<REPORT_TYPE>'
params = {
'api_token': '<API_TOKEN>',
'from': '<FROM_DATE>',
'to': '<TO_DATE>'
}
request_url = 'https://hq.appsflyer.com/export/{}/{}/v5'.format(app_id, report_type)
res = requests.request('GET', request_url, params=params)
if res.status_code != 200:
if res.status_code == 404:
print('There is a problem with the request URL. Make sure that it is correct')
else:
print('There was a problem retrieving data: ', res.text)
else:
f = open('{}-{}-{}-to-{}.csv'.format(app_id, report_type, params['from'], params['to']), 'w', newline='', encoding="utf-8")
f.write(res.text)
f.close()
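To request the additional raw-data fields shown in the URI example earlier with this same Python script, you could extend the params dict before the request is sent (a sketch; the field list is the one from the example URI above):

params['additional_fields'] = ('keyword_id,store_reinstall,deeplink_url,oaid,'
    'install_app_store,contributor1_match_type,contributor2_match_type,'
    'contributor3_match_type,match_type')
# requests URL-encodes the comma-separated list when building the query string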
using System;
using RestSharp;
using System.Text;
using System.Net;
using System.IO;
namespace Pull_API
{
class PullAPi
{
static void Main(string[] args)
{
var appID = "<APP_ID>";
var reportType = "<REPORT_TYPE>";
var apiToken = "<API_TOKEN>";
var from = "<FROM_DATE>";
var to = "<TO_DATE>";
var requestUrl = "https://hq.appsflyer.com/export/" + appID + "/" + reportType + "/v5?api_token=" + apiToken + "&from=" + from + "&to=" + to;
var client = new RestClient(requestUrl);
var request = new RestRequest(Method.GET);
request.AddHeader("Accept", "text/csv; charset=UTF-8");
IRestResponse response = client.Execute(request);
HttpStatusCode statusCode = response.StatusCode;
int numericStatusCode = (int)statusCode;
if(numericStatusCode != 200){
if(numericStatusCode == 404){
Console.WriteLine("There is a problem with the request URL. Make sure that it is correct.");
} else {
Console.WriteLine("There was a problem retrieving data: " + response.Content);
}
} else {
System.IO.File.WriteAllText(@"" + appID + "-" + reportType + "-" + from + "-to-" + to + ".csv", response.Content);
Console.WriteLine("Data retrieved succesfully");
}
}
}
}
<?php
$appID = '<APP_ID>';
$reportType = '<REPORT_TYPE>';
$apiToken = '<API_TOKEN>';
$from = '<FROM_DATE>';
$to = '<TO_DATE>';
$query = http_build_query([
'api_token' => $apiToken,
'from' => $from,
'to' => $to
]);
$requestUrl = 'https://hq.appsflyer.com/export/' . $appID . '/' . $reportType . '/v5?'.$query;
$report = $appID . '-' . $reportType . '-' . $from . '-to-' . $to . '.csv';
$curl = curl_init($requestUrl);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_ENCODING, "");
curl_setopt($curl, CURLOPT_NOSIGNAL, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_MAXREDIRS, 10);
curl_setopt($curl, CURLOPT_FAILONERROR, true);
curl_setopt($curl, CURLOPT_TIMEOUT, 100);
curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "GET");
curl_setopt($curl, CURLOPT_HTTPHEADER, array(
"cache-control: no-cache",
"Accept: text/csv; charset=UTF-8"
));
$response = curl_exec($curl);
$info = curl_getinfo($curl);
$err = curl_error($curl);
curl_close($curl);
var_dump($response);
if ($err) {
echo $info['http_code'];
echo "cURL Error #: " . $err . '. ';
if ($info['http_code'] == 404) {
echo 'There is a problem with the request URL. Make sure that it is correct';
}
if ($info['http_code'] == 401) {
echo 'There was a problem retrieving data: authentication failed.';
}
echo PHP_EOL;
} else {
$fp = fopen($report, 'w+');
fwrite($fp, $response);
fclose($fp);
echo $response;
}
?>
Additional information
Differences between Pull API V4 and V5.
Raw data: API V4 is still available for use. No changes are made to file formats and headers.
Aggregate data (V5):
In V5.0, the following additional fields are provided when the media_source=facebook:
Campaign ID
Adset name
Adset Id
Ad (Adgroup) Name
Ad (Adgroup) Id
Traits and limitations
Trait Comments
API token type required V1.0 token
Ad network access N
Agency access Y
Agency transparency Y
App-specific currency Y
App-specific timezone Y
Data freshness Realtime
Historical data Y
Non-organic data Y
Organic data Y
Rate limitations
Size limitations
Campaign name changes Pull API reports don't support campaign name changes
API error codes and troubleshooting
Status Code Symptom/message Solution
OK 200 Empty CSV file
OK
200
No API token found in the URI
Bad request
400
Raw Reports historical lookback is limited to 90 days.
Use
Bad request
400
Your API calls limit has been reached for report type
-
Unauthorized
401
Supplied API token is invalid
Ask the admin for the current token
Unauthorized
401
Account may be suspended.
Log in to the dashboard and check the account status.
Not found
404
AppsFlyer 404 error message page displays
|
Our API allows you to submit URLs for scanning and retrieve the results once the scan has finished. Furthermore, you can use the API to search existing scans by attributes such as domains, IPs, Autonomous System (AS) numbers, hashes, etc. To use the API, you should create a user account, attach an API key, and supply it when calling the API. Unauthenticated users only receive minor quotas for API calls.
Scans on our platform have one of three visibility levels; make sure to use the appropriate level for your application:
Public Scan is visible on the frontpage and in the public search results and info pages.
Unlisted Scan is not visible on the public page or search results, but is visible to vetted security researchers and security companies in our urlscan Pro platform. Use this if you want to submit malicious websites but are concerned that they might contain PII or non-public information.
Private Scan is only visible to you in your personalised search or if you share the scan ID with third parties. Scans will be deleted from our system after a certain retention period. Use this if you don't want anyone else to see the URLs you submitted.
To get started with our API, check out one of the existing tools and integrations for urlscan.io
These are some general pieces of advice we have collected over the years. Please stick to them, our life will be a lot easier!
DO NOT attempt to mirror or scrape our data wholesale. Please work with us if you have specific requirements.
TAKE CARE to remove PII from URLs or submit these scans as Unlisted, e.g. when there is an email address in the URL.
ATTENTION Certain JSON properties in API responses might occasionally be missing, make sure you handle this gracefully.
Use your API-Key for all API requests (submit, search, retrieve), otherwise you're subject to quotas for unauthenticated users.
Any API endpoint not documented on this page is not guaranteed to be stable or even be available in the future.
Make sure to follow HTTP redirects (HTTP 301 and HTTP 302) sent by urlscan.io.
Use exponential backoffs and limit concurrency for all types of requests. Respect HTTP 429 response codes!
Existing scans can be deleted at any time, even right after they were found in the search API. Make sure to handle this case.
Use a work queue with backoffs and retries for API actions such as scans, results, and DOM / response downloads.
Build a way to deduplicate searches and URL submissions on your end.
Consider using out-of-band mechanisms to determine that the URL you want to submit will actually deliver content.
Consider searching for a domain / URL before submitting it again.
Search: Combine search-terms into one query and limit it by date if possible, e.g. if you query on an interval.
Integrations: Use a custom HTTP user-agent string for your library/integration. Include a software version if applicable.
Integrations: Expose HTTP status codes and error messages to your users.
Integrations: Expect keys to be added to any JSON response object at any point in time, handle gracefully.
Some actions on urlscan.io are subject to quotas and rate-limits, regardless of whether they are performed in the UI or via the API.
There are separate limits per minute, per hour and per day for each action. Check your personal quotas for details.
Only successful requests count against your quota, i.e. requests which return an HTTP 200 status code.
We use a fixed-window approach to rate-limit requests, with resets at the full minute, hour and midnight UTC.
If you exceed a rate-limit for an action, the API will respond with a HTTP 429 error code for additional requests against that action.
You can query your current limits and used quota like this:
curl -H "Content-Type: application/json" -H "API-Key: $apikey" "https://urlscan.io/user/quotas/"
The API returns X-Rate-Limit HTTP headers on each request to a rate-limited resource. The values only apply to the action of that API request, i.e. if you exceeded your quota for private scans you might still have available quota to submit unlisted scans or perform a search request. The limit returned is always the next one to be exceeded in absolute numbers, so if your per-hour quota still has 1000 requests remaining but your per-day quota only has 500 requests left, you will receive the per-day quota. Make sure to respect the rate-limit headers as returned by every request.
X-Rate-Limit-Scope: ip-address
X-Rate-Limit-Action: search
X-Rate-Limit-Window: minute
X-Rate-Limit-Limit: 30
X-Rate-Limit-Remaining: 24
X-Rate-Limit-Reset: 2020-05-18T20:19:00.000Z
X-Rate-Limit-Reset-After: 17
X-Rate-Limit-Scope Either user (with cookie or API-Key header) or ip-address for unauthenticated requests.
X-Rate-Limit-Action Which API actions the rate-limit refers to, e.g. search or public.
X-Rate-Limit-Window Rate window with the fewest remaining calls: either minute, hour, or day.
X-Rate-Limit-Limit Your rate-limit for this action and window.
X-Rate-Limit-Remaining Remaining calls for this action and window (not counting the current request).
X-Rate-Limit-Reset ISO-8601 timestamp of when the rate-limit resets.
X-Rate-Limit-Reset-After Seconds remaining until the rate-limit resets.
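As a sketch, a client can read these headers and wait for the window to reset before retrying. The helper below assumes the Python requests library and a simple GET endpoint; everything else follows the header semantics documented above:

import time
import requests

def get_with_rate_limit(url, api_key):
    # GET a urlscan.io resource, sleeping and retrying once if the quota is exhausted
    headers = {'API-Key': api_key}
    resp = requests.get(url, headers=headers)
    if resp.status_code == 429:
        # Seconds until the current rate-limit window resets, as reported by the API
        wait = float(resp.headers.get('X-Rate-Limit-Reset-After', '60'))
        time.sleep(wait + 1)
        resp = requests.get(url, headers=headers)
    return resp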
The submission API allows you to submit a URL to be scanned and set some options for the scan.
curl -X POST "https://urlscan.io/api/v1/scan/" \
-H "Content-Type: application/json" \
-H "API-Key: $apikey" \
-d "{ \
\"url\": \"$url\", \"visibility\": \"public\", \
\"tags\": [\"demotag1\", \"demotag2\"] \
}"
import requests
import json
headers = {'API-Key':'$apikey','Content-Type':'application/json'}
data = {"url": "https://urlyouwanttoscan.com/path/", "visibility": "public"}
response = requests.post('https://urlscan.io/api/v1/scan/',headers=headers, data=json.dumps(data))
print(response)
print(response.json())
{
"url": "https://urlscan.io/api/v1/scan/",
"content_type": "json",
"method": "post",
"payload": {
"url": "https://tines.io/",
"visibility": "public",
"tags":[
"demotag1", "demotag2"
]
},
"headers": {
"API-Key": "{% credential urlscan_io %}"
},
"expected_update_period_in_days": "1"
}
The response to the API call will give you the scan ID and API endpoint for the scan, you can use it to retrieve the result after waiting for a short while. Until the scan is finished, the URL will respond with a HTTP 404 status code.
Other options that can be set in the POST data JSON object:
customagent: Override User-Agent for this scan
referer: Override HTTP referer for this scan
visibility: One of public, unlisted, private. Defaults to your configured default visibility.
tags: User-defined tags to annotate this scan, e.g. "phishing" or "malicious". Limited to 10 tags.
overrideSafety: If set to any value, this will disable reclassification of URLs with potential PII in them. Use with care!
If you have a list of URLs, you can use the following code-snippet to submit all of them:
cat list|tr -d "\r"|while read url; do
curl -X POST "https://urlscan.io/api/v1/scan/" \
-H "Content-Type: application/json" \
-H "API-Key: $apikey" \
-d "{\"url\": \"$url\", \"visibility\": \"public\"}"
sleep 2;
done
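A rough Python equivalent of the same loop that also follows the general advice above, searching before submitting and backing off on HTTP 429 (the file name, query, and sleep values are just assumptions):

import time
import requests

API_KEY = 'YOUR_API_KEY'
HEADERS = {'API-Key': API_KEY, 'Content-Type': 'application/json'}

with open('list') as f:                     # one URL per line
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # Optional de-duplication: skip URLs already scanned in the last 7 days
    search = requests.get('https://urlscan.io/api/v1/search/',
                          params={'q': f'page.url:"{url}" AND date:>now-7d', 'size': 1},
                          headers=HEADERS).json()
    if search.get('results'):
        continue
    resp = requests.post('https://urlscan.io/api/v1/scan/', headers=HEADERS,
                         json={'url': url, 'visibility': 'public'})
    if resp.status_code == 429:
        # Back off until the rate-limit window resets, then continue with the next URL
        time.sleep(float(resp.headers.get('X-Rate-Limit-Reset-After', '60')) + 1)
    time.sleep(2)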
Using the Scan ID received from the Submission API, you can use the Result API to poll for the scan. The most efficient approach is to wait at least 10 seconds before starting to poll, and then poll at 2-second intervals with an eventual upper timeout in case the scan does not return.
curl https://urlscan.io/api/v1/result/$uuid/
{
"url": "https://urlscan.io/api/v1/result/{{.uuid}}/",
"content_type": "json",
"method": "get",
"expected_update_period_in_days": "1"
}
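A minimal polling sketch following that advice: wait 10 seconds, then poll every 2 seconds until the result exists or an upper timeout is reached (the values are simply the ones suggested above):

import time
import requests

def wait_for_result(uuid, api_key, timeout=120):
    # Poll the Result API until the scan is finished or the timeout is hit
    url = f'https://urlscan.io/api/v1/result/{uuid}/'
    headers = {'API-Key': api_key}
    time.sleep(10)                      # give the scan time to finish first
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(url, headers=headers)
        if resp.status_code == 200:     # HTTP 404 means the scan is not finished yet
            return resp.json()
        time.sleep(2)
    raise TimeoutError(f'Scan {uuid} did not finish within {timeout} seconds')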
Once the scan is in our database, the URL will return a JSON object with these top-level properties:
task
Information about the submission: Time, method, options, links to screenshot/DOM
page
High-level information about the page: Geolocation, IP, PTR
lists
Lists of domains, IPs, URLs, ASNs, servers, hashes
data
All of the requests/responses, links, cookies, messages
meta
Processor output: ASN, GeoIP, AdBlock, Google Safe Browsing
stats
Computed stats (by type, protocol, IP, etc.)
verdicts
Verdicts about malicious content, with subkeys urlscan, engines, community.
Some of the information is contained in duplicate form for convenience.
In a similar fashion, you can get the DOM and screenshot for a scan using these URLs:
curl https://urlscan.io/screenshots/$uuid.png
curl https://urlscan.io/dom/$uuid/
You can use the same ElasticSearch syntax to search for scans as on the Search page. Each result has high-level metadata about the scan result and a link to the API for the full scan result.
curl "https://urlscan.io/api/v1/search/?q=domain:urlscan.io"
{
"url": "https://urlscan.io/api/v1/search/",
"content_type": "json",
"method": "get",
"payload": {
"q": "domain:tines.io OR domain:urlscan.io"
},
"expected_update_period_in_days": "1"
}
q
The query term (ElasticSearch Query String Query). Default: "*"
size
Number of results returned. Default: 100, Max: 10000 (depending on your subscription)
search_after
For iterating, value of the sort attribute of the last result you received (comma-separated).
offset
Deprecated, not supported anymore, use search_after.
The search API returns an array of results where each entry includes these items:
_id
The UUID of the scan
sort
The sort key, to be used with search_after
page
Information about the page after it finished loading
task
Parameters for the scan
stats
High-level stats about the page
brand
Pro OnlyDetected phishing against specific brands
The API search will only indicate an exact count of up to 10,000 results in the total property. After that the has_more flag will be true. Use the search_after query parameter for iterating over further results.
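For example, a small iterator that keeps passing the sort value of the last result back as search_after might look like this (a sketch; query and page size are arbitrary):

import requests

def iter_search(query, api_key, size=100):
    # Yield search results page by page, using search_after for pagination
    headers = {'API-Key': api_key}
    params = {'q': query, 'size': size}
    while True:
        data = requests.get('https://urlscan.io/api/v1/search/',
                            params=params, headers=headers).json()
        results = data.get('results', [])
        if not results:
            break
        yield from results
        # The sort key of the last result, sent back as a comma-separated string
        params['search_after'] = ','.join(str(v) for v in results[-1]['sort'])

# e.g. for hit in iter_search('domain:urlscan.io AND date:>now-7d', 'YOUR_API_KEY'): ...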
API search will find public scans performed by anyone as well as unlisted and private scans performed by you or your teams.
Query String Help
All API actions (including Search) are subject to your individual API Quotas.
The query field uses the ElasticSearch Query String to search for results. All queries are run in filter mode, sorted by date descending.
Refer to the documentation for advanced queries such as wildcard, regex, boolean operators, fuzzy searches, etc.
You can group and concatenate search-terms with brackets ( ), AND, OR, and NOT. The default operator is AND.
Always use the field names of the fields you want to search. Wildcards for the field-name are not supported!
Always escape reserved characters with backslash: + - = && || > < ! ( ) { } [ ] ^ " ~ * ? : \ /
Always limit the time-range if possible using date, e.g. date:>now-7d or date:>now-1y.
You can use wildcard (though no leading wildcard) and regex search on almost all fields. Regexes are always anchored to beginning/end of the tokens.
The date field allows relative queries like date:>now-7d or range-queries like date:[2020-01-01 TO 2020-02-01] or both combined.
Domain fields contain the whole domain and each smaller domain component; domain can be searched by google.com which will include www.google.com.
The page.url field is analysed as text; if you want to find multiple path components you should use phrase search with page.url:"foo/bar/batz".
The user and team fields are special, you can search for user:me or team:me to get your own scans.
Searchable fields: ip, domain, page.url, hash, asn, asnname, country, server, filename, task.visibility, task.method
The fields ip, domain, url, asn, asnname, country and server contain all requests of the scan.
To just search for the primary IP/Domain/ASN, prefix it with page., e.g. page.domain:paypal.com.
The API returns different error codes using the HTTP status and will also include some high-level information in the JSON response, including the status code, a message and sometimes a more elaborate description.
For scan submissions, there are various reasons why a scan request won't be accepted and will return an error code. This includes, among others:
Blacklisted domains and URLs, requested to be blacklisted by their respective owners.
Spammy submissions of URLs known to be used only for spamming this service.
Invalid hostnames or invalid protocol schemes (FTP etc).
Missing URL property... yes, it does happen.
Contains HTTP basic auth information... yes, that happens as well.
Non-resolvable hostnames (A, AAAA, CNAME) which we will not even try to scan.
An error will typically be indicated by the HTTP 400 status code. It might look like this:
{
"message": "DNS Error - Could not resolve domain",
"description": "The domain .google.com could not be resolved to a valid IPv4/IPv6 address. We won't try to load it in the browser.",
"status": 400
}
If you think an error is incorrect, let us know via mail!
A few companies and individuals have integrated urlscan.io into their tools and workflows.
If you'd like to see your product listed here, send us an email!
Commercial
Tines - Advanced security orchestration & automation platform
Demisto Enterprise - Incident Lifecycle Platform
Phantom - Security Automation & Orchestration Platform
Anomali - A Threat Intelligence Platform that enables businesses to integrate security products and leverage threat data
Exabeam - Smarter SIEM, Better Security
Siemplify - Security Orchestration, Automation and Incident Response
Swimlane - Security Orchestration, Automation and Response
IBM Resilient - IBM Resilient Incident Response Platform
Rapid7 Komand - An orchestration layer for security tools
Rapid7 InsightConnect - Orchestration and automation to accelerate your teams and tools
LogicHub - Intelligent Security Automation
ThreatConnect - Threat Intelligence, Analytics, and Orchestration Platform
FireEye Security Orchestrator - Simplify threat response through orchestration and automation
RSA NetWitness - Threat detection & response
Cisco SecureX Threat Response - Security that works together
Cybersponse - Security Orchestration, Automation and Incident Response Solution
Polarity - Augmented Reality for Your Desktop - Integration
Nevelex Labs - Security Flow is a new automation and orchestration tool for corporate security.
Sanguine eComscan - eComscan is smart CCTV for online stores
D3 SOAR - Security Orchestration and Automated Incident Response with MITRE ATT&CK
DTonomy AIR - SOAR with Adaptive Intelligence
Joe Sandbox Cloud - Automated Deep Malware Analysis in the Cloud for Malware
Hybrid Analysis - Free malware analysis service for the community that detects and analyzes unknown threats
Open Source
Intel Owl - analyze files, domains, IPs in multiple ways from a single API at scale
The Hive Cortex Analyzer
Amass - In-depth Attack Surface Mapping and Asset Discovery
DataSploit - An #OSINT Framework for Recon
urlScan2Hive - Creates case in The Hive, by Wayland Morgan
urlscan-py - Simple submission tool by Spencer Heywood
urlscan-py Docker - Dockerized version of urlscan-py
urlscan.io-R - Submission tool written in R by ekamioka
urlscan - urlscan.io library in R by Bob Rudis
Ruby API Client - By ninoseki
Miteru - An experimental phishing kit detection tool, by ninoseki
mitaka - urlscan.io friendly Chrome Extension, by ninoseki
PSURLScanio - Powershell module for interacting with the urlscan.io API, by sysgoblin
PowerShell snippet - for submitting to urlscan.io, by Nicholas Gipson
urlscanio - Python CLI tool for submitting to urlscan.io, by Arthur Verkaik
StalkPhish - The Phishing kits stalker, harvesting phishing kits for investigations, by tAd
Tools by Ecstatic Nobel on GitHub:
Sooty - The SOC Analysts all-in-one CLI tool to automate and speed up workflow.
Gotanda - Gotanda is Firefox Web Extension for OSINT.
Phishunt.io - Hunting phishings
Disclaimer: All trademarks belong to their respective owners.
Manual submissions are the submissions through our website. No registration is required. Manual submissions have the same features as API and automatic submissions.
Automatic Submissions are URLs we collect from a variety of sources and submit to urlscan.io internally. The reason behind this is to provide good coverage of well-known URLs, especially with a focus on potentially malicious sites. This helps when scanning a new site and searching for one of the many features (domain, IP, ASN) that can be extracted. Automatic Sources
|
Save optimization results for plotting
Dmitry Hitslast edited by
I am not an expert in python so the question maybe naive.
Following the example I am able to print out the optimization results.
def stop(self):
self.log('(SAR AF MAX %.3f) (SAR AF %.3f) Ending Value %.2f' %
(self.p.sar_afmax, self.p.sar_af, self.broker.getvalue()))
....
2007-12-31, (SAR AF MAX 0.190) (SAR AF 0.004) Ending Value 1043106.09
2007-12-31, (SAR AF MAX 0.190) (SAR AF 0.005) Ending Value 1043106.09
2007-12-31, (SAR AF MAX 0.190) (SAR AF 0.006) Ending Value 1201288.27
2007-12-31, (SAR AF MAX 0.190) (SAR AF 0.007) Ending Value 997203.68
....
Is there an easy way to export those results for plotting, for example?
I have tried to look in what is returned in:
# Run over everything
results = cerebro.run(maxcpus=1)
In [33]: results[2][0].p.sar_af
Out[33]: 0.01
In [34]: results[2][0].p.sar_afmax
Out[34]: 0.03
but did not find how to extract the ending value of the broker for each set of parameters.
Is there an easy way to export those results for plotting, for example?
Not really. Plotting was meant to plot single-run results. You could always pickle the results, but that won't help with plotting them.
but did not find how to extract the ending value of the broker for each set of parameters.
Save the result during the stop method of the strategy.
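A minimal sketch of that suggestion: append the parameters and the ending broker value to a shared list inside stop(), then export or plot the list after cerebro.run() (the strategy and parameter names here are just illustrative):

import backtrader as bt

results_table = []  # shared across optimization runs (fine with maxcpus=1)

class SarStrategy(bt.Strategy):
    params = (('sar_af', 0.02), ('sar_afmax', 0.2))

    def stop(self):
        # Record this parameter set together with the final portfolio value
        results_table.append({
            'sar_af': self.p.sar_af,
            'sar_afmax': self.p.sar_afmax,
            'ending_value': self.broker.getvalue(),
        })

# after cerebro.run(maxcpus=1):
# import pandas as pd
# pd.DataFrame(results_table).to_csv('opt_results.csv', index=False)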
|
Retrieves all metadata for a given folder. This cannot be used on the root folder with ID 0.
12345
The unique identifier that represents a folder.
To find the folder ID, open the folder in the web application and copy the ID from the URL. For example, if the URL is https://*.app.box.com/folder/123, the folder_id is 123.
The root folder of a Box account is always represented by the ID 0.
curl -i -X GET "https://api.box.com/2.0/folders/4353455/metadata" \
-H "Authorization: Bearer <ACCESS_TOKEN>"
BoxMetadataTemplateCollection<Dictionary<string, object>> metadataInstances = await client.MetadataManager
.GetAllFolderMetadataTemplatesAsync(folderId: "11111");
BoxFolder file = new BoxFolder(api, "id");
Iterable<Metadata> metadataList = folder.getAllMetadata();
for (Metadata metadata : metadataList) {
// Do something with the metadata.
}
folder_metadata = client.folder(folder_id='22222').get_all_metadata()
for instance in folder_metadata:
if 'foo' in instance:
print('Metadata instance {0} has value "{1}" for foo'.format(instance['id'], instance['foo']))
client.folders.getAllMetadata('11111')
.then(metadata => {
/* metadata -> {
entries:
[ { currentDocumentStage: 'Init',
'$type': 'documentFlow-452b4c9d-c3ad-4ac7-b1ad-9d5192f2fc5f',
'$parent': 'folder_11111',
'$id': '50ba0dba-0f89-4395-b867-3e057c1f6ed9',
'$version': 4,
'$typeVersion': 2,
needsApprovalFrom: 'Smith',
'$template': 'documentFlow',
'$scope': 'enterprise_12345' },
{ '$type': 'productInfo-9d7b6993-b09e-4e52-b197-e42f0ea995b9',
'$parent': 'folder_11111',
'$id': '15d1014a-06c2-47ad-9916-014eab456194',
'$version': 2,
'$typeVersion': 1,
skuNumber: 45334223,
description: 'Watch',
'$template': 'productInfo',
'$scope': 'enterprise_12345' },
{ Popularity: '25',
'$type': 'properties',
'$parent': 'folder_11111',
'$id': 'b6f36cbc-fc7a-4eda-8889-130f350cc057',
'$version': 0,
'$typeVersion': 2,
'$template': 'properties',
'$scope': 'global' } ],
limit: 100 }
*/
});
client.metadata.list(forFolderId: "22222") { (result: Result<[MetadataObject], BoxSDKError>) in
guard case let .success(metadata) = result else {
print("Error retrieving metadata")
return
}
print("Retrieved \(metadata.count) metadata instances:")
for instance in metadata {
print("- \(instance.template)")
}
}
{
"entries": [
{
"$parent": "folder_59449484661,",
"$template": "marketingCollateral",
"$scope": "enterprise_27335",
"$version": 1
}
],
"limit": 100
}
|
Scout Spider for Finding Fresh Proxy Websites
Jun 12, 2019
With so many proxy website URLs all over the place, it's difficult to tell which ones actually have new proxies posted or if you're just receiving the same old proxies that are cluttering up your list and wasting time on testing. So, I wrote a spider that will scrape proxies off of URLs and compare the first 15 results to see how different the results are. Easy peasy.
I omitted the spider settings, Request func, and the callback func to keep it compact:
from scrapy import Spider
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from difflib import SequenceMatcher
import threading
import re
import csv
IPPortPatternGlobal = re.compile(
r'(?P<ip>(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?))' # noqa
r'(?=.*?(?:(?:(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?))|(?P<port>\d{2,5})))', # noqa
flags=re.DOTALL,
)
file_name = 'scout_results'
lock = threading.Lock()
threads = []
pdata = {}
with open(f"./data/{file_name}.csv") as file:
try:
results = csv.DictReader(file, delimiter=',')
next(results)
for row in results:
try:
if int(row["count"]) > 0:
pdata[row['url']] = {'first_15': row['first_15'], 'count': row['count']}
except Exception as e:
print(f'Error: {e}')
except:
pass
class SingleSpider(Spider):
def __init__(self):
dispatcher.connect(self.spider_closed, signals.spider_closed)
global file_name
self.new_pdata = open(f"./data/{file_name}.csv", "w+")
self.new_pdata.write('url,first_15,count,ip_diff,c_diff\n')
def thread_compare(self, data):
with lock:
global pdata
url = data[0].strip()
f_15 = str(data[1]).strip()
count = str(data[2]).strip()
try:
ip_diff = str(self.compare(f_15, pdata[url]['first_15']))
count_diff = str(abs(int(count) - int(pdata[url]['count'])))
print(f'{url} - ip: {ip_diff} count: {count_diff}')
except Exception as e:
ip_diff = 'empty'
count_diff = 'empty'
print(f'Nothing to compare: {e}')
self.new_pdata.write(f'{url},{f_15},{count},{ip_diff},{count_diff}\n')
@staticmethod
def compare(block1, block2):
s = SequenceMatcher(lambda x: x in "\n", block1, block2)
return s.quick_ratio()
def spider_closed(self, spider):
self.new_pdata.close()
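For context, the omitted callback might look roughly like this: it pulls IPs out of the response with the regex above, keeps the first 15, and hands the totals to thread_compare in a thread. This is my reconstruction, not part of the original spider:

    def parse(self, response):
        # Hypothetical callback: extract (ip, port) tuples and compare against the previous run
        matches = IPPortPatternGlobal.findall(response.text)
        ips = [m[0] for m in matches]            # keep just the IP of each match
        first_15 = ';'.join(ips[:15])
        data = (response.url, first_15, len(ips))
        t = threading.Thread(target=self.thread_compare, args=(data,))
        t.start()
        threads.append(t)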
|
In a recent project, we were tasked with indexing the file paths and filenames of every file in every drive connected to macOS. And we needed to do it quickly. In this example, we'll show you how to rapidly index the file system and create a dictionary with the results. The keys in the dictionary will be the file paths as they will always be unique. The values will be the file base names themselves.
So let's get started. And as always, we're using Python 3.
First, let's import the following:
import os
from threading import Thread
from datetime import datetime
Next, let's create a dictionary that will hold all of our results.
dict1 = {}
Then we'll create the get_locations function. Its purpose is to create a list of drives and the top-level folders in each drive. We'll use this list to create a thread for each folder. But we may encounter actual files at the roots of these drives. Now, of course, we don't want to create a thread for a single file. We'll just add the files to our dictionary.
def get_locations():
locs = os.scandir('/Volumes')
rtn = []
for i in locs:
for entry in os.scandir(i.path):
if entry.is_dir(follow_symlinks=False):
rtn.append(entry.path)
elif entry.is_file(follow_symlinks=False):
dict1[entry.path] = entry.name
return rtn
Now let's create the function that will be tasked with walking each of the folders. It receives the path as a param. As it recursively walks the folder, we'll add the paths of the files it finds as keys in the dictionary and the filenames as values.
def walker(location):
for root, dir, files in os.walk(location, topdown = True):
for file in files:
dict1[root+"/"+file] = file
Now let's create the function that will manage our threads. It first gets the target locations from the get_locations function. Then for each location we start a thread running our walker. Finally, we join each thread to wait for them all to finish.
def create():
processes = [] # empty process list is created
targetLocations = get_locations()
for location in targetLocations:
process1 = Thread(target=walker, args=(location,))
process1.start()
processes.append(process1)
for t in processes:
t.join() # Wait for the threads to finish
Let's test it out.
t1= datetime.now()
create()
t2= datetime.now()
total =t2-t1
print("Time taken to index " , total)
#In my case with about 2TB of disk we get
#Time taken to index 0:01:29.111082
Now we have a dictionary that we could create a DataFrame with and do some further filtering and analysis. Or perhaps we could send it to Elasticsearch, a database, or store the dictionary in a pickle file to rapidly reload in the future. In this example let's simply create a DataFrame and filter using a regex to get all the .jpg files.
import pandas as pd
df = pd.DataFrame(list(dict1.items()), columns=['Path','File'])
imagesDataFrame = df["File"].str.contains('\.jpg$', case=False, regex=True)
You get the idea. So go out and try it for yourself. You can download the Gist here: https://gist.github.com/CoffieldWeb/57dc5b4dcc01d175335b43de1a16db96
|
Inside my contract I want to use the import keyword to load other contracts and libraries to make my code cleaner. As an example, I moved the DateTime.sol contract to another file and imported it inside the myContract file without any problem.
myContract.sol:
import "/Users/avatar/populus/contracts/DateTime.sol"; //works.
contract myContract{
DateTime date = new DateTime(); //does not work.
<Some_Code>
}
But from the myContract contract, when I try to call the DateTime contract as DateTime date = new DateTime(), I get the following error:
/usr/local/lib/python2.7/site-packages/populus/chain.py:580: in get_contract
if contract_name not in self.contract_factories:
/usr/local/lib/python2.7/site-packages/populus/utils/functional.py:50: in __get__
res = instance.__dict__[self.name] = self.func(instance)
/usr/local/lib/python2.7/site-packages/populus/chain.py:198: in contract_factories
compiled_contracts = self.project.compiled_contracts
/usr/local/lib/python2.7/site-packages/populus/project.py:153: in compiled_contracts
optimize=True,
/usr/local/lib/python2.7/site-packages/populus/compilation.py:42: in compile_project_contracts
compiled_sources = compile_files(contract_source_paths, **compiler_kwargs)
/usr/local/lib/python2.7/site-packages/solc/main.py:129: in compile_files
**kwargs
/usr/local/lib/python2.7/site-packages/solc/utils/string.py:91: in inner
return force_obj_to_text(fn(*args, **kwargs))
E Dynamic exception type: boost::exception_detail::clone_impl<dev::solidity::InternalCompilerError>
E std::exception::what: std::exception
E [dev::tag_comment*] = Compiled contract not found.
But when the DateTime contract was embedded inside the myContract.sol file, I was able to call DateTime date = new DateTime(); from the myContract contract without any problem. I used the approach from: How to call a Contract from an existing contract.
myContract.sol:
contract myContract{
DateTime date = new DateTime(); //works now.
<Some_Code>
}
contract DateTime{
<Some_Code>
}
[Q] I have tried this approach on Solidity Browser and it works. How could I fix this problem on Populus?
Some information:
platform darwin -- Python 2.7.12, pytest-3.0.2, py-1.4.31,
pluggy-0.3.1
plugins: populus-1.1.0
OS: Mac OS X
solc --version: the solidity compiler commandline interface Version: 0.4.8+commit.60cc1668.Darwin.appleclang
Thank you for your time and help.
|
Recently I had the pleasure of migrating a WordPress website, which resulted in a peculiar problem - the email-sending functionality on the new server no longer worked. After some digging around I found out that PHP has a mail function which uses the sendmail program to actually send your email.
Well, after messing around with real sendmail for a good while and still not really understanding how to configure it properly, I decided to write my own sendmail.py script that uses my Gmail account and its app password to send out an email to whoever PHP/WordPress wants to send an email to on my behalf.
After the script was done I had to tell PHP to use it via a sendmail_path = path to sendmail.py line inside php.ini, which was located at /etc/php5/apache2/php.ini on my Debian server. Then I just restarted the Apache server and voila, sending email worked!
Here is sendmail.py in all of its hacky glory:
#!/usr/bin/python
#this is replacement for sendmail that php can use to send its goddamn emails
import smtplib
import sys
def findToAddress(lines):
for i, val in enumerate(lines):
j = val.index("To: ")
if j != -1:
return val[j+4:]
return ""
fromaddr = 'whatever@example.com'
lines = sys.stdin.readlines()
toaddrs = findToAddress(lines)
msg = ''.join(lines)
username = 'you@gmail.com'
password = 'your app password'
# The actual mail send
server = smtplib.SMTP('smtp.gmail.com:25')
server.starttls()
server.login(username,password)
server.sendmail(fromaddr,toaddrs, msg)
server.quit()
|
Bounty: 200
Protected Overrides Function getJsonPrivate(method As String, otherParameters() As Tuple(Of String, String)) As String
Dim base = "https://www.coinmex.com"
Dim premethod = "/api/v1/spot/ccex/"
Dim longmethod = premethod + method
Dim timestampstring = getEstimatedTimeStamp().ToString
Dim stringtosign = timestampstring + "GET" + longmethod + "{}" '1553784499976GET/api/v1/spot/ccex/account/assets{}
Dim hasher = New System.Security.Cryptography.HMACSHA256(System.Text.Encoding.UTF8.GetBytes(_secret1))
Dim sighashbyte = hasher.ComputeHash(System.Text.Encoding.UTF8.GetBytes(stringtosign))
Dim signature = System.Convert.ToBase64String(sighashbyte) '"FIgrJFDOQctqnkOTyuv6+uTy6xw3OZiP4waC1u6P5LU="=
Dim url = base + longmethod 'https://www.coinmex.com/api/v1/spot/ccex/account/assets
'_apiKey1="cmx-1027e54e4723b09810576f8e7a5413**"
'_passphrase1= 1Us6&f%*K@Qsqr**
'
Dim response = CookieAwareWebClient.downloadString1(url, "", {Tuple.Create("ACCESS-KEY", _apiKey1), Tuple.Create("ACCESS-SIGN", signature), Tuple.Create("ACCESS-TIMESTAMP", timestampstring), Tuple.Create("ACCESS-PASSPHRASE", _passphrase1)})
Return response
End Function
Public Overrides Sub readbalances()
typicalReadBalances("account/assets", "data", "currencyCode", "available", "frozen", "", {})
End Sub
I think I did it like what’s listed here
https://github.com/coinmex/coinmex-official-api-docs/blob/master/README_EN.md#1-access-account-information
# Request
GET /api/v1/spot/ccex/account/assets
# Response
[
{
"available":"0.1",
"balance":"0.1",
"currencyCode":"ETH",
"frozen":"0",
"id":1
},
{
"available":"1",
"balance":"1",
"currencyCode":"USDT",
"frozen":"0",
"id":1
}
]
And for Signature
This is the manual says
The ACCESS-SIGN header is the output generated by using HMAC SHA256 to
create the HMAC SHA256 using the BASE64 decoding secret key in the
prehash string to generate timestamp + method + requestPath + “?” +
queryString + body (where ‘+’ represents the string concatenation) and
BASE64 encoded output. The timestamp value is the same as the
ACCESS-TIMESTAMP header. This body is the request body string or
omitted if there is no request body (usually the GET request). This
method should be capitalized.
Remember that before using it as the key to HMAC, base64 decoding (the
result is 64 bytes) is first performed on the 64-bit alphanumeric
password string. In addition, the digest output is base64 encoded
before sending the header.
User submitted parameters must be signed except for sign. First, the
string to be signed is ordered according to the parameter name (first
compare the first letter of all parameter names, in alphabetic order,
if you encounter the same first letter, then you move to the second
letter, and so on).
For example, if we sign the following parameters
curl "https://www.coinmex.com/api/v1/spot/ccex/orders?limit=100"
Timestamp = 1590000000.281
Method = "POST"
requestPath = "/api/v1/spot/ccex/orders"
queryString= "?limit=100"
body = {
'code': 'ct_usdt',
'side': 'buy',
'type': 'limit',
'size': '1',
'price': '1',
'funds': '',
}
Generate the string to be signed
Message = '1590000000.281GET/api/v1/spot/ccex/orders?limit=100{"code": "ct_usdt", "side": "buy", "type": "limit", "size": "1", "price": "0.1", "funds": ""}'
Then, the character to be signed is added with the private key
parameters to generate the final character string to be signed.
For example:
hmac = hmac(secretkey, Message, SHA256)
Signature = base64.encode(hmac.digest())
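In Python, the documented steps look roughly like this (only a sketch of the algorithm as quoted above; whether the secret must be used as raw UTF-8 bytes or base64-decoded first is exactly the open question below):

import base64
import hashlib
import hmac
import time

secret = '43a90185f5b7ab25af045e9e64bac5dc745934f359f1806fcdd2a4af80ac2'  # truncated example key
timestamp = str(int(time.time() * 1000))
method = 'GET'
request_path = '/api/v1/spot/ccex/account/assets'
body = ''                                   # GET request, no body

message = timestamp + method + request_path + body

# Variant 1: use the secret string directly as UTF-8 bytes
sig_utf8 = base64.b64encode(
    hmac.new(secret.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).digest()
).decode()

# Variant 2: base64-decode the secret first, as the manual's wording suggests
# (this will fail if the key is not actually valid base64)
# sig_b64 = base64.b64encode(
#     hmac.new(base64.b64decode(secret), message.encode('utf-8'), hashlib.sha256).digest()
# ).decode()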
I thought maybe _secret1 is a base64 string rather than UTF-8, so I changed it to
Dim base = "https://www.coinmex.com"
Dim premethod = "/api/v1/spot/ccex/"
Dim longmethod = premethod + method
Dim timestampstring = getEstimatedTimeStamp().ToString
'Dim stringtosign = timestampstring + "GET" + longmethod + "{}" '1553784499976GET/api/v1/spot/ccex/account/assets{} also doesn't work
Dim stringtosign = timestampstring + "GET" + longmethod '1553784499976GET/api/v1/spot/ccex/account/assets
Dim hasher = New System.Security.Cryptography.HMACSHA256(Convert.FromBase64String(_secret1)) 'secret looks like 43a90185f5b7ab25af045e9e64bac5dc745934f359f1806fcdd2a4af80ac2
Dim sighashbyte = hasher.ComputeHash(System.Text.Encoding.UTF8.GetBytes(stringtosign))
Dim signature = Convert.ToBase64String(sighashbyte) '"FIgrJFDOQctqnkOTyuv6+uTy6xw3OZiP4waC1u6P5LU="=
Dim url = base + longmethod 'https://www.coinmex.com/api/v1/spot/ccex/account/assets
'_apiKey1="cmx-1027e54e4723b09810576f8e7a5413**"
'_passphrase1= 1Us6&f%*K@Qsq***
'
Dim response = CookieAwareWebClient.downloadString1(url, "", {Tuple.Create("ACCESS-KEY", _apiKey1), Tuple.Create("ACCESS-SIGN", signature), Tuple.Create("ACCESS-TIMESTAMP", timestampstring), Tuple.Create("ACCESS-PASSPHRASE", _passphrase1)})
Return response
Not working either.
The secret key (I truncated a few letters) look like
43a90185f5b7ab25af045e9e64bac5dc745934f359f1806fcdd2a4af80ac2
Is this something that should be decoded as base 64 or utf8 or what?
The spec says it's base64. However, it doesn't look like a base64-encoded string. It looks like the letters are from 0-f.
The best answer will:
1. Tell me what went wrong in the code, so I make the change, try it, run it, and it works. Awesome.
A good answer will:
2. Give a sample simulation with fake/real signatures/nonce/passphrase and the real actual headers and signatures, so I can see where exactly I get a wrong result.
|
In the previous article, we saw how to program a traffic light. What if we made things a bit more complicated? Do you know roadwork traffic lights?
They are mobile lights set up for alternating one-way traffic because of roadworks. They are synchronised and communicate by radio. So I suggest using 2 micro:bits synchronised over Bluetooth to simulate roadwork traffic lights.
The wiring is exactly the same, but with 2 micro:bits.
The code for the 2 micro:bits will be different, because one of the two necessarily has to manage the light sequence (green, orange, red) and send instructions to the other light so that it can display the corresponding sequence. I therefore advise you (if possible) to use micro:bits of different colours to tell them apart more easily. Here is the light sequence:
Light 1 green, light 2 red (5 seconds)
Light 1 orange, light 2 red (1 second)
Light 1 red, light 2 green (5 seconds)
Light 1 red, light 2 orange (1 second)
Micro:bit #1 (the transmitter)
# Import the "microbit" library
from microbit import *
import radio
# Functions
def Eteindre_tout():
pin0.write_digital(0)
pin1.write_digital(0)
pin2.write_digital(0)
def Feu_rouge():
Eteindre_tout()
pin0.write_digital(1)
def Feu_orange():
Eteindre_tout()
pin1.write_digital(1)
def Feu_vert():
Eteindre_tout()
pin2.write_digital(1)
# Main program
radio.config(group=1)
radio.on()
# Infinite loop
while True:
# Light 1 green
Feu_vert()
# Light 2 red
radio.send('0')
# Wait 5 seconds
sleep(5000)
# Light 1 orange
Feu_orange()
# Wait 1 second
sleep(1000)
# Light 1 red
Feu_rouge()
# Light 2 green
radio.send('2')
# Wait 5 seconds
sleep(5000)
# Light 2 orange
radio.send('1')
sleep(1000)
Micro:bit #2 (the receiver)
# Import the "microbit" library
from microbit import *
import radio
# Functions
def Eteindre_tout():
pin0.write_digital(0)
pin1.write_digital(0)
pin2.write_digital(0)
def Feu_rouge():
Eteindre_tout()
pin0.write_digital(1)
def Feu_orange():
Eteindre_tout()
pin1.write_digital(1)
def Feu_vert():
Eteindre_tout()
pin2.write_digital(1)
# Main program
radio.config(group=1)
radio.on()
# Infinite loop
while True:
# Receive messages
message = radio.receive()
# Interpret the message
if message == '0':
# Light 2 red
Feu_rouge()
elif message == '1':
# Light 2 orange
Feu_orange()
elif message == '2':
# Light 2 green
Feu_vert()
Bonus exercise
Often, on real roadwork traffic lights, there is also a countdown to keep drivers patient by telling them how long until the light turns green. If you want to go further, you can use the LED matrix to show, on each micro:bit, the number of seconds before it turns green. Have fun...
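For example, on light 1 the 5-second red phase could be shown as a countdown on the LED matrix instead of a plain sleep(5000); just a sketch to adapt on both boards:

# Count down to green on the display instead of a silent 5-second wait
for remaining in range(5, 0, -1):
    display.show(str(remaining))
    sleep(1000)
display.clear()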
|
The following code causes a crash of FreeCAD (0.18) when the line with removeObject is executed - therefore I have it commented out:
(This is not the "real" code - I extracted it to show the problem.)
Code: Select all
objectAnn=None
import DraftSnap
class Ui_Dialog:
def start():
snapit(0)
def cb(point):
print("cb called by Snapper", point)
if point.__class__.__name__ == 'Vector':
print("Snapper clicked")
objectAnn.LabelText=["New Text"] #This works
# App.ActiveDocument.removeObject(objectAnn.Label) #THIS MAKES CRASH !!!!!!!!!!!!!!!!!!!!!!1
FreeCAD.ActiveDocument.recompute()
return(print("End cb"))
def snapit(i):
global objectAnn
objectAnn = App.ActiveDocument.addObject("App::AnnotationLabel","FCInfoToMouse")
objectAnn.LabelText=["Einfügepunkt klicken"]
point = FreeCADGui.Snapper.getPoint(callback = Ui_Dialog.cb) #snapit runs through and does not wait here till clicked
print("Line after Snapper",objectAnn.Label)
what = Ui_Dialog
what.start()
I tried it on two Intel PCs (Win10 with Intel(R) UHD Graphics 620; Win7 with Nvidia GeForce GT 540M) and an AMD PC (Win7, AMD Radeon R7), always with the same error:
Unhandled Base::Exception caught in GUIApplication::notify.
The error message is: Access violation
Is there a solution?
|
To know about threatened pieces, you need a check that looks, for each piece, at every diagonal it sits on, and whether there is some piece "behind" it. Since you are taking kings (damas) into account, you have to look at all the squares behind it on the same diagonal - it's not enough to put in a few "if"s and look at the adjacent squares.
So the recommendation here is to structure the board representation a bit better, instead of simply "lists inside lists" - create a class that lets you type very little to know what is on each square. In Python, a class with the __getitem__ method, for example, allows using board coordinates directly inside the brackets. And then you can add several small methods to check diagonals and positions, and compose their use: methods to move pieces that check whether a move is legal, and so on.
While we're at it, it is also worth creating a simple class to represent the playing pieces, instead of using arbitrary values like 1, 11, 2 and 22 - this allows "smart" comparisons that only return True, for example, if the piece on the other side belongs to the opponent, without having to duplicate all the comparison logic.
In short - there is no shortcut - the more you manage to factor the code into small functions and abstract things like this, using relative positions, the more readable the final code becomes.
I created a Board class here - to make things easier I included a function that can populate the board from the list of lists the way you created it. I use coordinates starting from the bottom-left corner, from 0 to 7 and 0 to 7, or the "A1, B1, ... up to H8" coordinate notation used on chess boards - so when populating I invert the y coordinate. (It is also very useful to be able to visualize what is on the board, so I included a representation for the pieces and dark squares using Unicode characters - I searched for "circle" here: https://www.fileformat.info/info/unicode/char/search.htm?q=circle&preview=entity )
As for the algorithm itself for finding the threatened pieces, it is just how we would describe the task in words:
class PlayingPiece:
def __init__(self, type, team):
self.type = type
self.team = team
def __eq__(self, other):
return self.team == other.team
def __str__(self):
# ◯, ⏺, ②, ❷
return (
"\N{LARGE CIRCLE}" if self.type == "peça" and self.team == "vermelha" else
"\N{BLACK CIRCLE FOR RECORD}" if self.type == "peça" and self.team == "preta" else
"\N{CIRCLED DIGIT TWO}" if self.type == "dama" and self.team == "vermelha" else
"\N{DINGBAT NEGATIVE CIRCLED DIGIT TWO}"
)
def __repr__(self):
return f"{self.type} {self.team}"
teams = ["preta", "vermelha"]
types_ = ["peça", "dama"]
class Board:
def __init__(self, size=(8,8)):
self.size = size
self.data = [None,] * (size[0] * size[1])
self.player = teams[0]
def load_from_legacy_lists(self, lists):
for y, row in enumerate(lists):
y = self.size[0] - 1 - y
for x, p in enumerate(row):
piece = (None if p == 0 else
PlayingPiece("peça", "preta") if p == 1 else
PlayingPiece("peça", "vermelha") if p == 2 else
PlayingPiece("dama", "preta") if p == 11 else
PlayingPiece("dama", "vermelha")
)
if piece:
self[y, x] = piece
def filter_pos(self, pos):
if 0 <= pos[0] < self.size[0] and 0 <= pos[1] < self.size[1]:
return True
return False
def black_pos(self, pos):
return not (pos[0] + pos[1] % 2) % 2
def _norm_pos(self, pos):
if len(pos) != 2:
raise ValueError()
if isinstance(pos, str):
pos = (ord(pos[0].lower()) - ord("a"), int(pos[1]) - 1)
if not self.black_pos(pos):
raise ValueError("Coordenada não está nas casas pretas")
if not self.filter_pos(pos):
raise ValueError("Coordenada inválida")
return pos
def __getitem__(self, pos):
pos = self._norm_pos(pos)
return self.data[self.size[0] * pos[0] + pos[1]]
def __setitem__(self, pos, value):
pos = self._norm_pos(pos)
if value is not None and not isinstance(value, PlayingPiece):
raise TypeError("Apenas peças de jogo ou None são aceitos")
self.data[self.size[0] * pos[0] + pos[1]] = value
def iter_directions(self):
for y in -1, 1:
for x in -1, 1:
yield y, x
def _check_menace(self, pos, direction):
other_dir = -direction[0], -direction[1]
oposite_square = pos[0] + other_dir[0], pos[1] + other_dir[1]
if not self.filter_pos(oposite_square) or self[oposite_square]:
# there is no opposite square, or it is occupied
return False
for i in range(1, 1 + self.size[0]):
square = pos[0] + i * direction[0], pos[1] + i * direction[1]
if not self.filter_pos(square):
# off the board
return False
item = self[square]
if item and item.team != self.player and (
i == 1 or item.type == "dama"
):
return True
def __iter__(self):
"""yield all valid board positions"""
for y in range(self.size[0]):
for x in range(self.size[1]):
try:
self._norm_pos((x,y))
except ValueError:
continue
yield (y, x)
def count_menace(self, pos):
pos = self._norm_pos(pos)
piece = self[pos]
last_direction = None
menace_count = sum(int(self._check_menace(pos, direction)) for direction in self.iter_directions())
return menace_count
def count_team_menace(self, team):
self.player = team  # the side whose pieces we are checking for threats
menaces = 0
for pos in self:
if self[pos] and self[pos].team == team:
menaces += self.count_menace(pos)
return menaces
def __repr__(self):
lines = []
for y in range(self.size[0] - 1, -1, -1):
line = ""
for x in range(self.size[1]):
if not self.black_pos((y, x)):
line += " "
continue
item = self[y, x]
line += ("\u2588" if not item else
str(item)
)
lines.append(line)
return "\n".join(lines)
Using this in interactive mode:
In [120]: aa = Board()
...: aa[0,0] = PlayingPiece("peça", "vermelha")
...: aa[1, 1] = PlayingPiece("peça", "preta")
...:
...:
In [121]: aa
Out[121]:
█ █ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █ █ █
⏺ █ █ █
◯ █ █ █
In [122]: aa.count_team_menace("preta")
Out[122]: 1
|
Code: Select all
INSERT INTO `phpbb_lastrss_autopost` (`name`, `url`, `next_check`, `next_check_after`, `destination_id`, `enabled`) VALUES
('NAME', 'http://URL.TO/RSS.FEED', 0, 12, DESTINATION_FORUM_ID, 1);
Code: Select all
Feed wasn´t updated, because malformed
Hello, it's nice to see you are back Smix, I really love this mod. About the error: it's caused by a bad URL ...
Code: Select all
Feed wasn´t updated, because malformed
Code: Select all
#-----[ SQL ]------------------------------------------
#
CREATE TABLE `phpbb_lastrss_autopost` (
`name` varchar(255) collate utf8_bin NOT NULL,
`url` varchar(255) collate utf8_bin NOT NULL,
`next_check` int(10) NOT NULL,
`next_check_after` int(2) NOT NULL,
`destination_id` int(3) NOT NULL,
`enabled` int(1) NOT NULL,
PRIMARY KEY (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
INSERT INTO `phpbb_lastrss_autopost` (`name`, `url`, `next_check`, `next_check_after`, `destination_id`, `enabled`) VALUES
('lastRSS', 'http://phpbb3.smika.net/lastrss.php', 0, 1, 1, 1);
INSERT INTO `phpbb_config` (`config_name`, `config_value`, `is_dynamic`) VALUES
('lastrss_type', 'curl', 0),
('lastrss_ap_version', '0.1.0', 0),
('lastrss_ap_enabled', '1', 0),
('lastrss_ap_items_limit', '5', 0),
('lastrss_ap_bot_id', '2', 0);
Code: Select all
/*CREATE TABLE `phpbb_lastrss_autopost` (
`name` varchar(255) collate utf8_bin NOT NULL,
`url` varchar(255) collate utf8_bin NOT NULL,
`next_check` int(10) NOT NULL,
`next_check_after` int(2) NOT NULL,
`destination_id` int(3) NOT NULL,
`enabled` int(1) NOT NULL,
PRIMARY KEY ( `name` )
) ENGINE = MYISAM ;*/
Code: Select all
/*CREATE TABLE `phpbb_lastrss_autopost` (
`name` => wowinsider.com
`url` => http://www.wowinsider.com/rss.xml
`next_check` => 0
`next_check_after` => 1
`destination_id` => 29
`enabled` => 1
PRIMARY KEY ( `name` )
) ENGINE = MYISAM ;*/
Code: Select all
//$config['lastrss_ap_bot_id'] = 164;
If you want to change (for testing purposes) the poster of the feed through the php file - not through the config table (the id is stored in the table phpbb_config) ... uncomment the line (delete the "//") and set the user_id of the account which you want to post the feeds (at this time, only one account is posting all the topics ...)
INSERT INTO `phpbb_lastrss_autopost` (`name`, `url`, `next_check`, `next_check_after`, `destination_id`, `enabled`) VALUES
('wowinsider.com', 'http://www.wowinsider.com/rss.xml', 0, 12, 29, 1);
Code: Select all
//$config['lastrss_ap_bot_id'] = 164;
Code: Select all
INSERT INTO `phpbb_lastrss_autopost` (`name`, `url`, `next_check`, `next_check_after`, `destination_id`, `enabled`) VALUES
('wowinsider.com', 'http://www.wowinsider.com/rss.xml', 0, 1, 29, 1);
Code: Select all
INSERT INTO `megalomania_config` (`config_name`, `config_value`, `is_dynamic`) VALUES
('lastrss_type', 'curl', 0),
('lastrss_ap_version', '0.1.0', 0),
('lastrss_ap_enabled', '1', 0),
('lastrss_ap_items_limit', '5', 0),
('lastrss_ap_bot_id', '91', 0);
Code: Select all
General Error
SQL ERROR [ mysql4 ]
Table 'megal5_forum.megalomania_lastrss_autopost' doesn't exist [1146]
SQL
SELECT * FROM megalomania_lastrss_autopost WHERE next_check < "1221815548" AND enabled = "1"
BACKTRACE
FILE: includes/db/mysql.php
LINE: 158
CALL: dbal_mysql->sql_error()
FILE: includes/functions_lastrss_autopost.php
LINE: 233
CALL: dbal_mysql->sql_query()
FILE: index.php
LINE: 28
CALL: include('includes/functions_lastrss_autopost.php')
Smix wrote: Please, remember this mod is still in development and installation is recommended only for experienced php & phpBB programmers ...
|
In a previous post, I introduced how to compute Perplexity as an evaluation metric for scikit-learn's topic model (LDA).
Reference: Experimenting with Perplexity, an evaluation metric for topic models
This time it is the gensim version.
gensim's LDA model has a method called log_perplexity, so "just use that" would have let me fold this into the previous article,
but things are not that simple, so I split it into its own post.
Now, about this log_perplexity method: it certainly looks like a method that returns the natural logarithm of perplexity.
If you want perplexity, it looks like computing $\exp(log\_perplexity)$ should be enough.
However, the following experiment shows that log_perplexity is in fact not the natural log of perplexity.
As in the previous article, we experiment with an artificial language containing 4 topics with 5 words each.
import numpy as np
from gensim.corpora.dictionary import Dictionary
from gensim.models import LdaModel
word_list = [
["white", "black", "red", "green", "blue"],
["dog", "cat", "fish", "bird", "rabbit"],
["apple", "banana", "lemon", "orange", "melon"],
["Japan", "America", "China", "England", "France"],
]
corpus_list = [
np.random.choice(word_list[topic], 100)
for topic in range(len(word_list)) for i in range(100)
]
# Create the dictionary mapping words to word IDs
dictionary = Dictionary(corpus_list)
# Convert to the BoW format that LdaModel can read
corpus = [dictionary.doc2bow(text) for text in corpus_list]
# Train with the number of topics set to 4
lda = LdaModel(corpus, num_topics=4, id2word=dictionary)
# Print log_perplexity
print(lda.log_perplexity(corpus))
# -2.173078593289852
The output is $-2.17\dots$.
If training worked correctly, the Perplexity should be about 5, so we would expect $\log(5)=1.609\dots$; even the sign is different.
Let's read the documentation carefully.
log_perplexity
Calculate and return per-word likelihood bound, using a chunk of documents as evaluation corpus.
Also output the calculated statistics, including the perplexity=2^(-bound), to log at INFO level.
According to this, perplexity is $2^{-bound}$, and what log_perplexity() returns
apparently corresponds to the bound.
Let's compute it.
print(2**(-lda.log_perplexity(corpus)))
# 4.509847333880428
The correct answer is 5, so this looks like a plausible result.
However, as a Perplexity this value is too good.
As long as we are training on this dummy data, it should be impossible to narrow things down to fewer than 5 words.
In fact, let's look at what the model actually learned.
print(lda.show_topics(num_words=6))
"""
[
(0, '0.100*"bird" + 0.098*"dog" + 0.092*"melon" + 0.092*"cat" + 0.089*"orange" + 0.089*"rabbit"'),
(1, '0.104*"red" + 0.104*"green" + 0.102*"white" + 0.098*"blue" + 0.092*"black" + 0.084*"fish"'),
(2, '0.136*"lemon" + 0.134*"apple" + 0.128*"banana" + 0.117*"orange" + 0.116*"melon" + 0.045*"China"'),
(3, '0.216*"France" + 0.191*"America" + 0.181*"Japan" + 0.172*"England" + 0.163*"China" + 0.011*"apple"')
]
"""
Ideally, the top 5 words of each topic should each be predicted with an occurrence probability of 0.2, so the model whose Perplexity we are computing here
is not that accurate. (No hyperparameter tuning was done at all; that is a problem in its own right, but it is outside the scope of this post.)
This looked odd, so I skimmed the source code: there is no sign of a base-2 logarithm being taken anywhere; plain natural logarithms are used.
So it seems better to regard this as an error in the documentation. (I expect it will be fixed eventually.)
perplexity=e^(-bound)
With this interpretation, everything adds up.
print(np.exp(-lda.log_perplexity(corpus)))
# 8.785288789149925
This becomes obvious if we vary the number of topics from 1 to 4 and compute the value.
With 1 topic nothing is narrowed down, so we should get roughly 20, the original vocabulary size;
with 2 topics we can narrow it down by half, so roughly 10;
and with 4 topics, if training succeeds, the correct value of 5 (though with the default hyperparameters it will not quite get there, so a somewhat larger value)
should be computed.
Let's try it.
for i in range(1, 7):
    # train a model with the given number of topics
    lda = LdaModel(corpus, num_topics=i, id2word=dictionary)
    print(f"num_topics: {i}, Perplexity: {np.exp(-lda.log_perplexity(corpus))}")
"""
num_topics: 1, Perplexity: 20.032145913774283
num_topics: 2, Perplexity: 11.33724134037765
num_topics: 3, Perplexity: 8.921203895821304
num_topics: 4, Perplexity: 7.436279264160588
num_topics: 5, Perplexity: 7.558708610631221
num_topics: 6, Perplexity: 5.892976661122544
"""
The results are roughly as expected.
Now, you have to be aware that log_perplexity is not the logarithm of the perplexity,
but the logarithm of the perplexity with its sign flipped; otherwise it becomes a source of serious mistakes.
After all, perplexity is a metric where smaller is considered better.
With that in mind, it is tempting to pick the parameters for which the log_perplexity output is small, but we can now see that this would be wrong.
Since the logarithm is taken and then the sign is flipped, you have to pick the larger value.
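For example, when comparing candidate numbers of topics, the selection rule has to be phrased as "smallest np.exp(-lda.log_perplexity(corpus))", i.e. the largest log_perplexity output. A minimal sketch, reusing the corpus and dictionary defined above and assuming, as argued in this post, that the returned value is -ln(perplexity):
# pick the candidate with the smallest perplexity,
# i.e. the LARGEST log_perplexity value
perplexities = {}
for k in range(1, 7):
    model = LdaModel(corpus, num_topics=k, id2word=dictionary)
    perplexities[k] = np.exp(-model.log_perplexity(corpus))
best_k = min(perplexities, key=perplexities.get)
print(best_k, perplexities[best_k])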
(There is also the argument that a smaller Perplexity does not necessarily mean a better model,
but that goes beyond the scope of today's post, so I will leave it out.)
|
The problem comes from: Python Exercise Book.
Problem 2.1: You have a directory holding one month of your diary entries, all txt files. To avoid word-segmentation issues, assume the content is all English; work out the word you consider most important in each diary entry.
Reference code
#coding: utf-8
import re, os
from collections import Counter

# directory containing the target files
PATH = 'D:'

def getCounter(source):
    # read an English plain-text file and count the occurrences of each word
    with open(source) as f:
        data = f.read()
    data = data.lower()  # lowercase all letters
    datalist = re.split(r'[\s]+', data)  # split data on whitespace
    return Counter(datalist)

def run(PATH):
    # change to the directory containing the target files
    os.chdir(PATH)
    # iterate over the txt files in that directory
    total_counter = Counter()  # create a Counter() object
    for i in os.listdir(os.getcwd()):
        if os.path.splitext(i)[1] == '.txt':  # split off the extension
            total_counter += getCounter(i)  # accumulate several Counter()s
    return total_counter.most_common()  # convert the Counter object to a list

if __name__ == '__main__':
    dic = run(PATH)
    for i in range(len(dic)):
        print('%15s ----> %3s' % (dic[i][0], dic[i][1]))
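The reference code counts words across all the diaries combined; the exercise, however, asks for the most important word of each diary. A minimal sketch of one way to do that - downweighting words that appear in many diaries (a rough TF-IDF) - assuming the same directory layout and reusing the getCounter helper above:
import math

def most_important_word(path):
    os.chdir(path)
    txt_files = [f for f in os.listdir(os.getcwd())
                 if os.path.splitext(f)[1] == '.txt']
    counters = {f: getCounter(f) for f in txt_files}
    n_docs = len(txt_files)
    # document frequency: in how many diaries does each word appear?
    df = Counter()
    for c in counters.values():
        df.update(c.keys())
    for fname, c in counters.items():
        # score = term frequency * inverse document frequency
        best = max(c, key=lambda w: c[w] * math.log(n_docs / df[w] + 1))
        print('%15s ----> %s' % (fname, best))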
Errors encountered
Encoding problem
UnicodeDecodeError: 'gbk' codec can't decode byte...
Two ways to solve it:
Re-decode with decode('utf-8'):
fp = open(filename,'rb')
content = fp.read().decode('utf-8')
Pass the parameter encoding='UTF-8' to the open method:
content = open('filename', mode='r', encoding='UTF-8').read()
But be careful to check whether the original text really is UTF-8 encoded. In any case, encoding in Python is a big pitfall, so take extra care.
The filename, directory name, or volume label syntax is incorrect
The slash inside the path should be / rather than \: PATH = 'E:/Python/pydata-book-master/ch02'
AttributeError: 'list' object has no attribute '…
Check what that object actually is with print(type(name)), then look up the functions it supports.
This article was written by mmmwhy; last edited: May 2, 2019 at 01:53 pm
|
blob: 21a94804c9ce336a4188cb604b62f62d64877031
#Copyright (c) 2007, Playful Invention Company
#Copyright (c) 2008-10, Walter Bender
#Copyright (c) 2009,10 Raul Gutierrez Segales
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in
#all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
#THE SOFTWARE.
import pygtk
pygtk.require('2.0')
import gtk
import gobject
import logging
_logger = logging.getLogger('turtleart-activity')
from sugar.activity import activity
try: # 0.86 toolbar widgets
from sugar.activity.widgets import ActivityToolbarButton, StopButton
from sugar.graphics.toolbarbox import ToolbarBox, ToolbarButton
has_toolbarbox = True
except ImportError:
has_toolbarbox = False
from sugar.graphics.toolbutton import ToolButton
from sugar.datastore import datastore
from sugar import profile
from gettext import gettext as _
import os.path
import tarfile
from TurtleArt.tapalette import palette_names, help_strings
from TurtleArt.taconstants import OVERLAY_LAYER, ICON_SIZE
from TurtleArt.taexporthtml import save_html
from TurtleArt.taexportlogo import save_logo
from TurtleArt.tautils import data_to_file, data_to_string, data_from_string, \
get_path, chooser
from TurtleArt.tawindow import TurtleArtWindow
from TurtleArt.tacollaboration import Collaboration
class TurtleArtActivity(activity.Activity):
def __init__(self, handle):
""" Activity subclass for Turtle Art """
super(TurtleArtActivity, self).__init__(handle)
datapath = get_path(activity, 'data')
self._setup_visibility_handler()
self.has_toolbarbox = has_toolbarbox
self._setup_toolbar()
canvas = self._setup_scrolled_window()
self._check_ver_change(datapath)
self._setup_canvas(canvas)
self._setup_palette_toolbar()
self._setup_sharing()
# Activity toolbar callbacks
def do_save_as_html_cb(self, button):
""" Write html out to datastore. """
self.save_as_html.set_icon("htmlon")
_logger.debug("saving html code")
# until we have URLs for datastore objects, always embed images
embed_flag = True
# grab code from stacks
html = save_html(self, self.tw, embed_flag)
if len(html) == 0:
return
# save the html code to the instance directory
datapath = get_path(activity, 'instance')
save_type = '.html'
if len(self.tw.saved_pictures) > 0:
if self.tw.saved_pictures[0].endswith(('.svg')):
save_type = '.xml'
html_file = os.path.join(datapath, "portfolio" + save_type)
f = file(html_file, "w")
f.write(html)
f.close()
if embed_flag == False:
# need to make a tarball that includes the images
tar_path = os.path.join(datapath, 'portfolio.tar')
tar_fd = tarfile.open(tar_path, 'w')
try:
tar_fd.add(html_file, "portfolio.html")
import glob
image_list = glob.glob(os.path.join(datapath, 'image*'))
for i in image_list:
tar_fd.add(i, os.path.basename(i))
finally:
tar_fd.close()
# Create a datastore object
dsobject = datastore.create()
# Write any metadata (here we specifically set the title of the file
# and specify that this is a plain text file).
dsobject.metadata['title'] = self.metadata['title'] + " " + \
_("presentation")
dsobject.metadata['icon-color'] = profile.get_color().to_string()
if embed_flag == True:
if save_type == '.xml':
dsobject.metadata['mime_type'] = 'application/xml'
else:
dsobject.metadata['mime_type'] = 'text/html'
dsobject.set_file_path(html_file)
else:
dsobject.metadata['mime_type'] = 'application/x-tar'
dsobject.set_file_path(tar_path)
dsobject.metadata['activity'] = 'org.laptop.WebActivity'
datastore.write(dsobject)
dsobject.destroy()
gobject.timeout_add(250, self.save_as_html.set_icon, "htmloff")
self.tw.saved_pictures = []
return
def do_save_as_logo_cb(self, button):
""" Write logo code out to datastore. """
self.save_as_logo.set_icon("logo-saveon")
logo_code_path = self._dump_logo_code()
if logo_code_path is None:
return
# Create a datastore object
dsobject = datastore.create()
# Write any metadata (here we specifically set the title of the file
# and specify that this is a plain text file).
dsobject.metadata['title'] = self.metadata['title'] + ".lg"
dsobject.metadata['mime_type'] = 'text/plain'
dsobject.metadata['icon-color'] = profile.get_color().to_string()
# Set the file_path in the datastore.
dsobject.set_file_path(logo_code_path)
datastore.write(dsobject)
gobject.timeout_add(250, self.save_as_logo.set_icon, "logo-saveoff")
return
def do_load_ta_project_cb(self, button):
""" Load a project from the Journal """
chooser(self, 'org.laptop.TurtleArtActivity', self._load_ta_project)
def _load_ta_project(self, dsobject):
""" Load a ta project from the datastore """
try:
_logger.debug("opening %s " % dsobject.file_path)
self.read_file(dsobject.file_path, False)
except:
_logger.debug("couldn't open %s" % dsobject.file_path)
def do_load_python_cb(self, button):
""" Load Python code from the Journal. """
self.load_python.set_icon("pippy-openon")
self.tw.load_python_code_from_file(fname=None, add_new_block=True)
gobject.timeout_add(250, self.load_python.set_icon, "pippy-openoff")
def do_save_as_image_cb(self, button):
""" Save the canvas to the Journal. """
self.save_as_image.set_icon("image-saveon")
_logger.debug("saving image to journal")
self.tw.save_as_image()
gobject.timeout_add(250, self.save_as_image.set_icon, "image-saveoff")
return
def do_keep_cb(self, button):
""" Save a snapshot of the project to the Journal. """
tmpfile = self._dump_ta_code()
if tmpfile is not None:
# Create a datastore object
dsobject = datastore.create()
# Write any metadata
dsobject.metadata['title'] = self.metadata['title'] + " " + \
_("snapshot")
dsobject.metadata['icon-color'] = profile.get_color().to_string()
dsobject.metadata['mime_type'] = 'application/x-turtle-art'
dsobject.metadata['activity'] = 'org.laptop.TurtleArtActivity'
dsobject.set_file_path(tmpfile)
datastore.write(dsobject)
# Clean up
dsobject.destroy()
os.remove(tmpfile)
return
# Main/palette toolbar button callbacks
def do_palette_cb(self, button):
""" Show/hide palette """
if self.tw.palette == True:
self.tw.hideshow_palette(False)
self.do_hidepalette()
if self.has_toolbarbox and self.tw.selected_palette is not None:
self.palette_buttons[self.tw.selected_palette].set_icon(
palette_names[self.tw.selected_palette] + 'off')
else:
self.tw.hideshow_palette(True)
self.do_showpalette()
if self.has_toolbarbox:
self.palette_buttons[0].set_icon(palette_names[0] + 'on')
def do_palette_buttons_cb(self, button, i):
""" Palette selector buttons """
if self.tw.selected_palette is not None:
self.palette_buttons[self.tw.selected_palette].set_icon(
palette_names[self.tw.selected_palette] + 'off')
if self.tw.selected_palette == i:
# second click so hide the palette (#2505)
self.tw.hideshow_palette(False)
self.do_hidepalette()
return
self.palette_buttons[i].set_icon(palette_names[i] + 'on')
self.tw.show_palette(i)
self.do_showpalette()
# These methods are called both from buttons and palette.
def do_hidepalette(self):
""" Hide the palette. """
if hasattr(self, 'palette_button'):
self.palette_button.set_icon("paletteon")
self.palette_button.set_tooltip(_('Show palette'))
def do_showpalette(self):
""" Show the palette. """
if hasattr(self, 'palette_button'):
self.palette_button.set_icon("paletteoff")
self.palette_button.set_tooltip(_('Hide palette'))
def do_hideshow_cb(self, button):
""" Toggle visibility. """
self.tw.hideshow_button()
if self.tw.hide == True: # we just hid the blocks
self.blocks_button.set_icon("hideshowon")
self.blocks_button.set_tooltip(_('Show blocks'))
else:
self.blocks_button.set_icon("hideshowoff")
self.blocks_button.set_tooltip(_('Hide blocks'))
# update palette buttons too
if self.tw.palette == False:
self.do_hidepalette()
else:
self.do_showpalette()
def do_hide(self):
""" Hide blocks. """
self.blocks_button.set_icon("hideshowon")
self.blocks_button.set_tooltip(_('Show blocks'))
self.do_hidepalette()
def do_show(self):
""" Show blocks. """
self.blocks_button.set_icon("hideshowoff")
self.blocks_button.set_tooltip(_('Hide blocks'))
self.do_showpalette()
def do_eraser_cb(self, button):
""" Clear the screen and recenter. """
self.eraser_button.set_icon("eraseroff")
self.recenter()
self.tw.eraser_button()
gobject.timeout_add(250, self.eraser_button.set_icon, "eraseron")
def do_run_cb(self, button):
""" Callback for run button (rabbit). """
self.run_button.set_icon("run-faston")
self.tw.lc.trace = 0
self.tw.run_button(0)
gobject.timeout_add(1000, self.run_button.set_icon, "run-fastoff")
def do_step_cb(self, button):
""" Callback for step button (turtle). """
self.step_button.set_icon("run-slowon")
self.tw.lc.trace = 1
self.tw.run_button(3)
gobject.timeout_add(1000, self.step_button.set_icon, "run-slowoff")
def do_debug_cb(self, button):
""" Callback for debug button (bug). """
self.debug_button.set_icon("debugon")
self.tw.lc.trace = 1
self.tw.run_button(9)
gobject.timeout_add(1000, self.debug_button.set_icon, "debugoff")
def do_stop_cb(self, button):
""" Callback for stop button. """
self.stop_turtle_button.set_icon("stopitoff")
self.tw.stop_button()
self.step_button.set_icon("run-slowoff")
self.run_button.set_icon("run-fastoff")
def do_samples_cb(self, button):
""" Sample projects open dialog """
# FIXME: encapsulation!
self.tw.load_file(True)
# run the activity
self.stop_turtle_button.set_icon("stopiton")
self.tw.run_button(0)
def recenter(self):
""" Recenter scrolled window around canvas. """
hadj = self.sw.get_hadjustment()
hadj.set_value(0)
self.sw.set_hadjustment(hadj)
vadj = self.sw.get_vadjustment()
vadj.set_value(0)
self.sw.set_vadjustment(vadj)
def do_fullscreen_cb(self, button):
""" Hide the Sugar toolbars. """
self.fullscreen()
self.recenter()
def do_grow_blocks_cb(self, button):
""" Grow the blocks. """
self.do_resize_blocks(1.5)
def do_shrink_blocks_cb(self, button):
""" Shrink the blocks. """
self.do_resize_blocks(0.67)
def do_resize_blocks(self, scale_factor):
""" Scale the blocks. """
self.tw.block_scale *= scale_factor
self.tw.resize_blocks()
def do_cartesian_cb(self, button):
""" Display Cartesian coordinate grid. """
if self.tw.cartesian:
self.tw.set_cartesian(False)
else:
self.tw.set_cartesian(True)
def do_polar_cb(self, button):
""" Display Polar coordinate grid. """
if self.tw.polar:
self.tw.set_polar(False)
else:
self.tw.set_polar(True)
def do_rescale_cb(self, button):
""" Rescale coordinate system (100==height/2 or 100 pixels). """
if self.tw.cartesian:
cartesian = True
self.tw.set_cartesian(False)
else:
cartesian = False
if self.tw.polar:
polar = True
self.tw.set_polar(False)
else:
polar = False
if self.tw.coord_scale == 1:
self.tw.coord_scale = self.tw.height / 200
self.rescale_button.set_icon("contract-coordinates")
self.rescale_button.set_tooltip(_('Rescale coordinates down'))
else:
self.tw.coord_scale = 1
self.rescale_button.set_icon("expand-coordinates")
self.rescale_button.set_tooltip(_('Rescale coordinates up'))
self.tw.eraser_button()
if cartesian:
self.tw.set_cartesian(True)
if polar:
self.tw.set_polar(True)
def get_document_path(self, async_cb, async_err_cb):
""" View TA code as part of view source. """
ta_code_path = self._dump_ta_code()
if ta_code_path is not None:
async_cb(ta_code_path)
def _dump_logo_code(self):
""" Save Logo code to temporary file. """
datapath = get_path(activity, 'instance')
tmpfile = os.path.join(datapath, 'tmpfile.lg')
code = save_logo(self.tw)
if len(code) == 0:
_logger.debug('save_logo returned None')
return None
try:
f = file(tmpfile, "w")
f.write(code)
f.close()
except Exception, e:
_logger.error("Couldn't dump code to view source: " + str(e))
return tmpfile
def _dump_ta_code(self):
""" Save TA code to temporary file. """
datapath = get_path(activity, 'instance')
tmpfile = os.path.join(datapath, 'tmpfile.ta')
try:
data_to_file(self.tw.assemble_data_to_save(), tmpfile)
except:
_logger.debug("couldn't save snapshot to journal")
tmpfile = None
return tmpfile
def __visibility_notify_cb(self, window, event):
""" Callback method for when the activity's visibility changes. """
if event.state == gtk.gdk.VISIBILITY_FULLY_OBSCURED:
self.tw.background_plugins()
elif event.state in \
[gtk.gdk.VISIBILITY_UNOBSCURED, gtk.gdk.VISIBILITY_PARTIAL]:
self.tw.foreground_plugins()
def update_title_cb(self, widget, event, toolbox):
""" Update the title. """
toolbox._activity_toolbar._update_title_cb()
toolbox._activity_toolbar._update_title_sid = True
def _keep_clicked_cb(self, button):
""" Keep button clicked. """
self.jobject_new_patch()
def _setup_toolbar(self):
""" Setup toolbar according to Sugar version """
if self.has_toolbarbox:
# Use 0.86 toolbar design
# Create toolbox and secondary toolbars
self._toolbox = ToolbarBox()
activity_toolbar_button = ActivityToolbarButton(self)
edit_toolbar = gtk.Toolbar()
edit_toolbar_button = ToolbarButton(label=_('Edit'),
page=edit_toolbar,
icon_name='toolbar-edit')
view_toolbar = gtk.Toolbar()
view_toolbar_button = ToolbarButton(label=_('View'),
page=view_toolbar,
icon_name='toolbar-view')
self._palette_toolbar = gtk.Toolbar()
self._palette_toolbar_button = ToolbarButton(
page=self._palette_toolbar, icon_name='palette')
help_toolbar = gtk.Toolbar()
help_toolbar_button = ToolbarButton(label=_("Help"),
page=help_toolbar,
icon_name='help-toolbar')
journal_toolbar = gtk.Toolbar()
journal_toolbar_button = ToolbarButton(page=journal_toolbar,
icon_name='activity-journal')
# Add the toolbars and buttons to the toolbox
activity_toolbar_button.show()
self._toolbox.toolbar.insert(activity_toolbar_button, -1)
edit_toolbar_button.show()
self._toolbox.toolbar.insert(edit_toolbar_button, -1)
journal_toolbar_button.show()
self._toolbox.toolbar.insert(journal_toolbar_button, -1)
view_toolbar_button.show()
self._toolbox.toolbar.insert(view_toolbar_button, -1)
self._palette_toolbar_button.show()
self._toolbox.toolbar.insert(self._palette_toolbar_button, -1)
help_toolbar_button.show()
self._toolbox.toolbar.insert(help_toolbar_button, -1)
self._add_separator(self._toolbox.toolbar)
self._make_project_buttons(self._toolbox.toolbar)
self._add_separator(self._toolbox.toolbar, True)
stop_button = StopButton(self)
stop_button.props.accelerator = '<Ctrl>Q'
self._toolbox.toolbar.insert(stop_button, -1)
stop_button.show()
else:
# Use pre-0.86 toolbar design
self._toolbox = activity.ActivityToolbox(self)
self.set_toolbox(self._toolbox)
project_toolbar = gtk.Toolbar()
self._toolbox.add_toolbar(_('Project'), project_toolbar)
view_toolbar = gtk.Toolbar()
self._toolbox.add_toolbar(_('View'), view_toolbar)
view_toolbar_button = view_toolbar
edit_toolbar = gtk.Toolbar()
self._toolbox.add_toolbar(_('Edit'), edit_toolbar)
edit_toolbar_button = edit_toolbar
journal_toolbar = gtk.Toolbar()
self._toolbox.add_toolbar(_('Import/Export'), journal_toolbar)
journal_toolbar_button = journal_toolbar
help_toolbar = gtk.Toolbar()
self._toolbox.add_toolbar(_('Help'), help_toolbar)
help_toolbar_button = help_toolbar
self._make_palette_buttons(project_toolbar, palette_button=True)
self._add_separator(project_toolbar)
self._make_project_buttons(project_toolbar)
self.keep_button = self._add_button('filesaveoff', _("Save snapshot"),
self.do_keep_cb,
journal_toolbar_button)
self.save_as_html = self._add_button('htmloff', _("Save as HTML"),
self.do_save_as_html_cb,
journal_toolbar_button)
self.save_as_logo = self._add_button('logo-saveoff', _("Save as Logo"),
self.do_save_as_logo_cb,
journal_toolbar_button)
self.save_as_image = self._add_button('image-saveoff', _(
"Save as image"),
self.do_save_as_image_cb,
journal_toolbar_button)
self.load_ta_project = self._add_button('load-from-journal',
_("Import project from the Journal"), self.do_load_ta_project_cb,
journal_toolbar_button)
self._add_separator(journal_toolbar)
self.load_python = self._add_button('pippy-openoff', _(
"Load Python block"),
self.do_load_python_cb,
journal_toolbar_button)
self.samples_button = self._add_button("ta-open", _('Load example'),
self.do_samples_cb, journal_toolbar_button)
copy = self._add_button('edit-copy', _('Copy'), self._copy_cb,
edit_toolbar_button, '<Ctrl>c')
paste = self._add_button('edit-paste', _('Paste'), self._paste_cb,
edit_toolbar_button, '<Ctrl>v')
fullscreen_button = self._add_button('view-fullscreen',
_("Fullscreen"), self.do_fullscreen_cb,
view_toolbar_button, '<Alt>Return')
cartesian_button = self._add_button('view-Cartesian',
_("Cartesian coordinates"),
self.do_cartesian_cb,
view_toolbar_button)
polar_button = self._add_button('view-polar', _("Polar coordinates"),
self.do_polar_cb, view_toolbar_button)
self._add_separator(view_toolbar)
self.coordinates_label = self._add_label(
_("xcor") + " = 0 " + _("ycor") + " = 0 " + _("heading") + " = 0",
view_toolbar)
self._add_separator(view_toolbar, True)
self.rescale_button = self._add_button('expand-coordinates',
_("Rescale coordinates up"), self.do_rescale_cb,
view_toolbar_button)
self.resize_up_button = self._add_button('resize+', _("Grow blocks"),
self.do_grow_blocks_cb, view_toolbar_button)
self.resize_down_button = self._add_button('resize-',
_("Shrink blocks"), self.do_shrink_blocks_cb, view_toolbar_button)
self.hover_help_label = self._add_label(
_("Move the cursor over the orange palette for help."),
help_toolbar, gtk.gdk.screen_width() - 2 * ICON_SIZE)
# Setup palette toolbar only AFTER initializing the plugins
# self._setup_palette_toolbar()
edit_toolbar.show()
view_toolbar.show()
help_toolbar.show()
self._toolbox.show()
if self.has_toolbarbox:
# Hack as a workaround for #2050
edit_toolbar_button.set_expanded(True)
edit_toolbar_button.set_expanded(False)
self._palette_toolbar_button.set_expanded(True)
else:
self._toolbox.set_current_toolbar(1)
def _setup_palette_toolbar(self):
# The palette toolbar is only used with 0.86+
if self.has_toolbarbox:
self.palette_buttons = []
for i, name in enumerate(palette_names):
if i > 0:
suffix = 'off'
else:
suffix = 'on'
self.palette_buttons.append(self._add_button(name + suffix,
help_strings[name], self.do_palette_buttons_cb,
self._palette_toolbar_button, None, i))
self._add_separator(self._palette_toolbar, True)
self._make_palette_buttons(self._palette_toolbar_button)
self.set_toolbar_box(self._toolbox)
self._palette_toolbar.show()
def _make_palette_buttons(self, toolbar, palette_button=False):
""" Creates the palette and block buttons for both toolbar types"""
if palette_button: # old-style toolbars need this button
self.palette_button = self._add_button("paletteoff", _(
'Hide palette'),
self.do_palette_cb, toolbar, _('<Ctrl>p'))
self.blocks_button = self._add_button("hideshowoff", _('Hide blocks'),
self.do_hideshow_cb, toolbar, _('<Ctrl>b'))
def _make_project_buttons(self, toolbar):
""" Creates the turtle buttons for both toolbar types"""
self.eraser_button = self._add_button("eraseron", _('Clean'),
self.do_eraser_cb, toolbar, _('<Ctrl>e'))
self.run_button = self._add_button("run-fastoff", _('Run'),
self.do_run_cb, toolbar, _('<Ctrl>r'))
self.step_button = self._add_button("run-slowoff", _('Step'),
self.do_step_cb, toolbar, _('<Ctrl>w'))
self.debug_button = self._add_button("debugoff", _('Debug'),
self.do_debug_cb, toolbar, _('<Ctrl>d'))
self.stop_turtle_button = self._add_button("stopitoff",
_('Stop turtle'), self.do_stop_cb, toolbar, _('<Ctrl>s'))
def _setup_scrolled_window(self):
""" Create a scrolled window to contain the turtle canvas. """
self.sw = gtk.ScrolledWindow()
self.set_canvas(self.sw)
self.sw.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.sw.show()
canvas = gtk.DrawingArea()
width = gtk.gdk.screen_width() * 2
height = gtk.gdk.screen_height() * 2
canvas.set_size_request(width, height)
self.sw.add_with_viewport(canvas)
canvas.show()
return canvas
def _check_ver_change(self, datapath):
""" To be replaced with date checking. """
# Check to see if the version has changed
try:
version = os.environ['SUGAR_BUNDLE_VERSION']
except KeyError:
version = "unknown"
filename = "version.dat"
version_data = []
new_version = True
try:
file_handle = open(os.path.join(datapath, filename), "r")
if file_handle.readline() == version:
new_version = False
file_handle.close()
except IOError:
_logger.debug("Couldn't read version number")
version_data.append(version)
try:
file_handle = open(os.path.join(datapath, filename), "w")
file_handle.writelines(version_data)
file_handle.close()
except IOError:
_logger.debug("Couldn't write version number")
return new_version
def _setup_canvas(self, canvas):
""" Initialize the turtle art canvas. """
bundle_path = activity.get_bundle_path()
self.tw = TurtleArtWindow(canvas, bundle_path, self,
profile.get_color().to_string(),
profile.get_nick_name())
# self.tw.activity = self
self.tw.window.grab_focus()
path = os.path.join(os.environ['SUGAR_ACTIVITY_ROOT'], 'data')
self.tw.save_folder = path
if self._jobject and self._jobject.file_path:
self.read_file(self._jobject.file_path)
else: # if new, load a start brick onto the canvas
self.tw.load_start()
def _setup_sharing(self):
self._collaboration = Collaboration(self.tw, self)
self._collaboration.setup()
def _setup_visibility_handler(self):
""" Notify when the visibility state changes """
self.add_events(gtk.gdk.VISIBILITY_NOTIFY_MASK)
self.connect("visibility-notify-event", self.__visibility_notify_cb)
def write_file(self, file_path):
""" Write the project to the Journal. """
_logger.debug("Write file: %s" % file_path)
self.metadata['mime_type'] = 'application/x-turtle-art'
data_to_file(self.tw.assemble_data_to_save(), file_path)
def read_file(self, file_path, run_it=True):
""" Read a project in and then run it. """
import os
import tempfile
import shutil
if hasattr(self, 'tw'):
_logger.debug("Read file: %s" % (file_path))
# Could be a gtar (newer builds) or tar (767) file
if file_path.endswith(('.gtar', '.tar')):
tar_fd = tarfile.open(file_path, 'r')
tmpdir = tempfile.mkdtemp()
try:
# We'll get 'ta_code.ta' and possibly a 'ta_image.png'
# but we will ignore the .png file
# If run_it is True, we want to create a new project
tar_fd.extractall(tmpdir)
self.tw.load_files(os.path.join(tmpdir, 'ta_code.ta'), \
run_it) # create a new project flag
finally:
shutil.rmtree(tmpdir)
tar_fd.close()
# Otherwise, assume it is a .ta file
else:
_logger.debug("trying to open a .ta file:" + file_path)
self.tw.load_files(file_path, run_it)
# run the activity
if run_it:
self.stop_turtle_button.set_icon("stopiton")
self.tw.run_button(0)
else:
_logger.debug("Deferring reading file %s" % (file_path))
def jobject_new_patch(self):
""" Save instance to Journal. """
oldj = self._jobject
self._jobject = datastore.create()
self._jobject.metadata['title'] = oldj.metadata['title']
self._jobject.metadata['title_set_by_user'] = \
oldj.metadata['title_set_by_user']
# self._jobject.metadata['activity'] = self.get_service_name()
self._jobject.metadata['activity_id'] = self.get_id()
self._jobject.metadata['keep'] = '0'
# Is this the correct syntax for saving the buddies list?
# self._jobject.metadata['buddies'] = self.tw.buddies
self._jobject.metadata['preview'] = ''
self._jobject.metadata['icon-color'] = profile.get_color().to_string()
self._jobject.file_path = ''
datastore.write(self._jobject,
reply_handler=self._internal_jobject_create_cb,
error_handler=self._internal_jobject_error_cb)
self._jobject.destroy()
def _copy_cb(self, button):
clipBoard = gtk.Clipboard()
_logger.debug("serialize the project and copy to clipboard")
data = self.tw.assemble_data_to_save(False, False)
if data:  # note: "is not []" would always be True; test for a non-empty result instead
text = data_to_string(data)
clipBoard.set_text(text)
self.tw.paste_offset = 20
def _paste_cb(self, button):
clipBoard = gtk.Clipboard()
_logger.debug("paste to the project")
text = clipBoard.wait_for_text()
if text is not None:
if self.tw.selected_blk is not None and \
self.tw.selected_blk.name == 'string':
for i in text:
self.tw.process_alphanumeric_input(i, -1)
self.tw.selected_blk.resize()
else:
self.tw.process_data(data_from_string(text),
self.tw.paste_offset)
self.tw.paste_offset += 20
def _add_label(self, string, toolbar, width=None):
""" add a label to a toolbar """
label = gtk.Label(string)
label.set_line_wrap(True)
if width is not None:
label.set_size_request(width, -1)
label.show()
toolitem = gtk.ToolItem()
toolitem.add(label)
toolbar.insert(toolitem, -1)
toolitem.show()
return label
def _add_separator(self, toolbar, expand=False):
""" add a separator to a toolbar """
separator = gtk.SeparatorToolItem()
separator.props.draw = True
separator.set_expand(expand)
toolbar.insert(separator, -1)
separator.show()
def _add_button(self, name, tooltip, callback, toolbar, accelerator=None,
arg=None):
""" add a button to a toolbar """
button = ToolButton(name)
button.set_tooltip(tooltip)
if arg is None:
button.connect('clicked', callback)
else:
button.connect('clicked', callback, arg)
if accelerator is not None:
try:
button.props.accelerator = accelerator
except AttributeError:
pass
button.show()
if hasattr(toolbar, 'insert'): # the main toolbar
toolbar.insert(button, -1)
else: # or a secondary toolbar
toolbar.props.page.insert(button, -1)
if not name in help_strings:
help_strings[name] = tooltip
return button
|
An Introduction to Sports Analytics with Pandas
Sports analytics is one of the most important areas of data science. Advances in data collection and analysis methods have made it more attractive for teams to adopt strategies based on data analytics.
Data analytics provides valuable insight into both team effectiveness and player performance. Used sensibly and systematically, data analytics is likely to put a team ahead of its competitors.
Some clubs have an entire team dedicated to data analytics. Liverpool is a pioneer in the use of data analytics, which, in my opinion, is an important part of their success. They are the most recent Premier League champions and the winners of the 2019 Champions League.
In this post we will use Pandas to derive meaningful results from the matches of the German Bundesliga in the 2017-18 season. The datasets can be downloaded from the link. We will use part of the datasets presented in the paper "A public data set of spatio-temporal match events in soccer competitions".
The datasets are stored in JSON format, which can easily be read into pandas data frames.
import numpy as np
import pandas as pd
events = pd.read_json("/content/events_Germany.json")
matches = pd.read_json("/content/matches_Germany.json")
teams = pd.read_json("/content/teams.json")
players = pd.read_json("/content/players.json")
events.head()
The events data frame contains detailed information about the events that occurred in the matches. For example, the first row tells us that player 15231 made a "simple pass" from position (50,50) to (50,48) in the third second of match 2516739.
The events data frame includes player and team IDs, but not player and team names. We will add them from the teams and players data frames using the merge function.
The IDs are stored in the "wyId" column of the teams and players data frames.
#merge with teams
events = pd.merge(
events, teams[['name','wyId']],
left_on='teamId', right_on='wyId'
)
events.rename(columns={'name':'teamName'}, inplace=True)
events.drop('wyId', axis=1, inplace=True)
#merge with players
events = pd.merge(
events, players[['wyId','shortName','firstName']],
left_on='playerId', right_on='wyId'
)
events.rename(
columns={'shortName':'playerName', 'firstName':'playerFName'},
inplace=True
)
events.drop('wyId', axis=1, inplace=True)
We merged the data frames on the columns containing the IDs and then renamed the new columns. Finally, the "wyId" column is dropped, because the IDs are already stored in the events data frame.
Average number of passes per match
Teams that dominate a game usually make more passes. In general, they have a better chance of winning the match. Of course, there are exceptions.
Let's check the average number of passes per match for each team. First, we will create a data frame containing the team name, the match ID, and the number of passes made in that match.
pass_per_match = events[events.eventName == 'Pass'][['teamName','matchId','eventName']]\
    .groupby(['teamName','matchId']).count()\
    .reset_index().rename(columns={'eventName':'numberofPasses'})
Augsburg made 471 passes in match 2516745. Here is the list of the top 5 teams by number of passes per match.
pass_per_match[['teamName','numberofPasses']]\
.groupby('teamName').mean()\
.sort_values(by='numberofPasses', ascending=False).round(1)[:5]
Unsurprisingly, Bayern has the most passes. They have dominated the Bundesliga in recent years.
Average pass length per player
A successful pass can be assessed in many ways. Some passes are so good that they make scoring very easy.
We will focus on a quantitative measure of passes, namely their length. Some players are very good at long passes.
The positions column contains the start and end position of the ball in x and y coordinates. We can calculate the length based on these coordinates. Let's first create a data frame that contains only the passes.
passes = events[events.eventName=='Pass'].reset_index(drop=True)
Now we can calculate the length.
pass_length = []
for i in range(len(passes)):
    length = np.sqrt(((passes.positions[i][0]['x'] -
                       passes.positions[i][1]['x'])**2) +
                     ((passes.positions[i][0]['y'] -
                       passes.positions[i][1]['y'])**2))
    pass_length.append(length)
passes['pass_length'] = pass_length
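As an aside, the same lengths can be computed without an explicit Python loop; a small sketch, assuming the positions column has the two-element list-of-dicts structure described above:
xy = pd.DataFrame({
    'x0': passes.positions.map(lambda p: p[0]['x']),
    'y0': passes.positions.map(lambda p: p[0]['y']),
    'x1': passes.positions.map(lambda p: p[1]['x']),
    'y1': passes.positions.map(lambda p: p[1]['y']),
})
passes['pass_length'] = np.sqrt((xy.x0 - xy.x1)**2 + (xy.y0 - xy.y1)**2)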
The groupby function can be used to calculate the average pass length for each player.
passes[['playerName','pass_length']].groupby('playerName')\
.agg(['mean','count']).\
sort_values(by=('pass_length','mean'), ascending=False).round(1)[:5]
We listed the top 5 players by average pass length, together with the number of passes made. The number of passes matters, because making only 3 passes does not say much about the average. Thus, we can filter out players with fewer than a certain number of passes.
Average number of passes in wins and non-wins
Let's compare the average number of passes between won and not-won matches. I will use B. Leverkusen's matches as an example.
events = pd.merge(events, matches[['wyId','winner']], left_on='matchId', right_on='wyId')
events.drop('wyId', axis=1, inplace=True)
Now we can create a data frame that contains only the events whose team ID equals 2446 (the ID of B. Leverkusen).
leverkusen = events[events.teamId == 2446]
B. Leverkusen is the winner if the value in the "winner" column equals 2446. To compute the average number of passes in the matches won by B. Leverkusen, we need to filter the data frame on the winner and the event name. Then we apply groupby and count to see the number of passes per match.
passes_in_win = leverkusen[(leverkusen.winner == 2446) & (leverkusen.eventName == 'Pass')][['matchId','eventName']].groupby('matchId').count()
passes_in_notwin = leverkusen[(leverkusen.winner != 2446) & (leverkusen.eventName == 'Pass')][['matchId','eventName']].groupby('matchId').count()
We can easily get the average number of passes by applying the mean function.
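A minimal sketch of that last step, using the passes_in_win and passes_in_notwin frames built above:
print(passes_in_win.eventName.mean())     # average passes per match in wins
print(passes_in_notwin.eventName.mean())  # average passes per match otherwise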
Although making more passes does not guarantee a win, it helps you dominate the game and increases your chances of winning.
The possibilities of sports analytics go far beyond what we have done in this post. However, without getting familiar with the basics, it is harder to absorb the more advanced techniques.
|
Nowadays streaming data is a crucial data source for any business that wants to perform real-time analytics. The first step in analyzing data in real time is to load the streaming data - often coming from webhooks - into the warehouse in real time. In this article, you will learn how to load real-time streaming data from a Webhook to BigQuery. But first, let us briefly understand these systems.
What is a Webhook?
A webhook is a very useful and resource-light way to capture and react to events. A webhook provides a mechanism to notify a client application whenever a new event happens on the server side.
Webhooks are also known as a Reverse API. Under normal circumstances with an API, the client side makes a call to the server-side application. In the case of webhooks, the reverse happens: it is the server side that calls the webhook, i.e. the server side calls the client side.
Because the webhook calls the client application, the client does not have to continuously poll the server application for new updates. You can read more about webhooks here.
What is Google BigQuery?
BigQuery is a Google-managed, cloud-based data warehouse service. It is a dedicated store used to process and analyze huge volumes of data in seconds. Its architecture allows it to automatically scale both up and down based on the volume of data and query complexity.
In addition to its high-performance features, BigQuery also takes care of all resource management. It pretty much works out of the box. You can read more about BigQuery features here.
Streaming Data from Webhook to BigQuery
One of the below-mentioned approaches can be used to load streaming data to BigQuery:
Method 1: Use a fully-managed Data Integration Platform like Hevo Data that lets you move data without writing a single line of code (comes with a 14-day free trial)
Method 2: Build custom scripts to configure ETL jobs to perform the data load
In this post, we will cover the second method (Custom Code) in detail. Towards the end of the post, you can also find a quick comparison of both data streaming methods so that you can assess your requirements and choose judiciously.
Webhook to BigQuery ETL Using Custom Code:
The steps involved in migrating data from WebHook to BigQuery are as follows:
Getting data out of your application using Webhook.
Preparing Data received from Webhook.
Loading data into Google BigQuery.
Step 1: Getting data out of your application using Webhook
Set up a webhook for your application and define the endpoint URL to which it will deliver the data. This is the same URL from which the target application will read the data.
Step 2: Preparing Data received from Webhook
Webhooks post data to your specified endpoints in JSON format. It is up to you to parse the JSON objects and determine how to load them into your BigQuery data warehouse.
You need to make sure the target BigQuery table is aligned with the source data layout, specifically the column order and the data types of the columns.
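For illustration, here is a minimal sketch of a webhook receiver that parses the posted JSON and maps it to the table's column order; Flask, the route name, and the field names are assumptions, not part of the original setup:
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def receive_event():
    event = request.get_json(force=True)  # parse the JSON body posted by the webhook
    # map the payload to the column order of the target BigQuery table
    row = (event.get("id"), event.get("name"), event.get("created_at"))
    # hand the row over to the loading step described in Step 3
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)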
Step 3: Loading data into Google BigQuery
We can load data into BigQuery directly using an API call, or we can create a CSV file and then load it into a BigQuery table.
Create a Python script that reads the data received on the webhook endpoint and loads it into the BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()
dataset_id = 'dataset_name'  # replace with your dataset ID
table_id = 'table_name'      # replace with your table ID
table_ref = client.dataset(dataset_id).table(table_id)
table = client.get_table(table_ref)  # API request

# Receive data from the webhook (e.g. inside your endpoint handler) and
# convert it into rows that match the column order and types of the table.
# The row below is only a placeholder; adapt it to your payload and schema.
rows_to_insert = [
    # ('value_for_column1', 'value_for_column2'),
]

errors = client.insert_rows(table, rows_to_insert)  # API request
assert errors == []
You can write the streaming data to a file at a specific interval and use the bq command-line tool to upload the files to your datasets, adding schema and data type information. The syntax of the bq command line can be found in the GCP documentation. Repeat this process as many times as it takes to load all of your tables into BigQuery.
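A rough sketch of that interval-based buffering step (the folder, file naming, and row format here are assumptions, not part of any official tooling):
import csv
import datetime

def flush_to_csv(buffered_rows, folder="/tmp/webhook_batches"):
    # write the rows buffered since the last flush to a timestamped CSV file,
    # ready to be uploaded to GCS and loaded with the bq tool
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d_%H%M%S")
    path = "{}/events_{}.csv".format(folder, stamp)
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(buffered_rows)
    return path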
Once the data has been extracted from your application using the webhook, the next step is to upload it to GCS. There are multiple techniques to upload data to GCS.
Upload file to GCS bucket
Using gsutil: With the gsutil utility we can upload a local file to a GCS (Google Cloud Storage) bucket. To copy a file to GCS:
gsutil cp local_folder/file_name.csv gs://gcs_bucket_name/path/to/folder/
Using Web console: An alternative way to upload the data from your local machine to GCS is to use the web console. To use the web console option, follow the steps below.
First of all, you need to log in to your GCP account. You must have a working Google account with access to GCP. In the menu, click on Storage and navigate to Browser in the left tab.
If needed, create a bucket to upload your data to. Make sure that the name of the bucket you choose is globally unique.
Click on the name of the bucket you created in step #2; this will let you browse for the file on your local machine.
Choose the file and click on the upload button. A progress bar will appear. Wait for the upload to complete; you will then see the file loaded in the bucket.
Create Table in BigQuery
Go to BigQuery from the menu.
On G-Cloud console, click on create a dataset option. Next, provide a dataset name and location.
Next, click on the name of the created dataset. On G-Cloud console, click on create table option and provide the dataset name, table name, project name, and table type.
Load the data into BigQuery Table
Start the command-line tool by clicking on the Cloud Shell icon shown here.
The syntax of the bq command line to load the file in the BigQuery table:
Note: Autodetect flag identifies the table schema
bq --location=[LOCATION] load --source_format=[FORMAT]
[DATASET].[TABLE] [PATH_TO_SOURCE] [SCHEMA]
[LOCATION] is an optional parameter that represents the location name, like "us-east".
[FORMAT] to load a CSV file, set it to CSV.
[DATASET] the dataset name.
[TABLE] the table name to load the data into.
[PATH_TO_SOURCE] the path to the source file on the GCS bucket.
[SCHEMA] specify the schema.
You can specify your schema using bq command line
bq --location=US load --source_format=CSV your_dataset.your_table gs://your_bucket/your_data.csv ./your_schema.json
Your target table schema can also be autodetected:
bq --location=US load --autodetect --source_format=CSV your_dataset.your_table gs://mybucket/data.csv
The BigQuery command-line interface gives us 3 options for writing to an existing table.
Overwrite the table
bq --location=US load --autodetect --replace --source_format=CSV your_target_dataset_name.your_target_table_name gs://source_bucket_name/path/to/file/source_file_name.csv
Append data to the table
bq --location=US load --autodetect --noreplace --source_format=CSV your_target_dataset_name.your_target_table_name gs://source_bucket_name/path/to/file/source_file_name.csv ./schema_file.json
Adding new fields in the target table
bq --location=US load --noreplace --schema_update_option=ALLOW_FIELD_ADDITION --source_format=CSV your_target_dataset.your_target_table gs://bucket_name/source_data.csv ./target_schema.json
Update data into BigQuery Table
The steps above do not, by themselves, complete the data update on the target table. The data is first stored in an intermediate table, because GCS acts as a staging area for the BigQuery upload. There are two ways of updating the target table, as described here.
Update the rows in the target table. Next, insert new rows from the intermediate table
UPDATE target_table t
SET t.value = s.value
FROM intermediate_table s
WHERE t.id = s.id;
INSERT target_table (id, value)
SELECT id, value
FROM intermediate_table WHERE NOT id IN (SELECT id FROM target_table);
Delete all the rows from the target table which are in the intermediate table. Then, insert all the rows newly loaded in the intermediate table. Here the intermediate table will be in truncate and load mode.
DELETE FROM final_table f WHERE f.id IN (SELECT id from intermediate_table); INSERT data_setname.target_table(id, value) SELECT id, value FROM data_set_name.intermediate_table;
Limitations of writing custom Scripts to stream data from Webhook to BigQuery:
The above code is built around a specific schema defined by the Webhook source. The scripts may break if the source schema is modified.
If, in the future, you find that data transformations need to be applied to your incoming webhook events, you will have to invest additional time and resources in them.
If the volume of incoming data spikes, you might have to throttle the data moving to BigQuery.
Given that you are dealing with real-time streaming data, you would need to build very strong alerting and notification systems to avoid data loss due to an anomaly at the source or destination end. Since webhooks are triggered by certain events, this data loss can be very grave for your business.
A Simple Way to Stream Data from Webhooks to BigQuery:
A much easier way to get rid of all the complexities that come your way in the custom-code method is to implement a fully managed Data Pipeline solution like Hevo Data. Hevo can be set up in minutes and would help you move data from Webhooks to BigQuery in 2 simple steps:
Connect and configure your Webhook endpoint URL
Configure the BigQuery warehouse to which the data has to be streamed
In addition to webhooks, Hevo can move data from a variety of data sources (Databases, Cloud Applications, SDKs and more). Hevo ensures that your data is reliably and securely moved from any source to BigQuery in real-time.
Before you go ahead and take a call on the right approach to move data from Webhook to BigQuery – Do experience Hevo’s hassle-free Data Pipeline platform by signing up for a 14-day free trial here.
|
Some time ago, while reviewing old sample reports that passed through MalSilo, a few caught my attention; below are the main triggers of one of them.
- PE32 sample
- some generic YARA rule matches
- AutoIT match
- DarkComet match
- Persistence via schtasks.exe
- svchost.exe connecting to an exotic domain
Everything maliciously normal here, but after a quick check and some metadata quirks it turned out the specimen was packed with CypherIT.
As far as I can tell from a few searches, the crypter is well advertised in forums and YouTube videos.
The first part of the analysis will look at some of the sample's layers, reaching its core with the RunPE (shellcode) section.
The second part will briefly map the features advertised on the CypherIT website to its code components.
The third section explores more packed malware, and their final payloads, thanks to MalSilo's backends (one of which is MISP).
Last but not least, we will have a look at a recent sample.
Let’s start peeling …
Technical details of the sample are given below
First seen (MalSilo): 2018-11-28
File name: K2bkm.jpg
drop site: https[:]//f.coka[.]la/K2bkm.jpg
md5: 7ece8890e1e797843d68e73eb0ac2bc4
sha1: 4448b907f2c9f9ee8e4b13e4b6c292eef4b70930
sha256: 84d0c9352eacf92a919f972802a6a7949664f204f984aaf4b44da1b1aa7aa729
ssdeep: 24576:Fu6J33O0c+JY5UZ+XC0kGso6FapiE6kitdnsxWY:Hu0c++OCvkGs9FaeFY
Process execution flow
Behavior Graph
Pstree view
Detailed view
sample.exe | \_ (copy of) sample.exe \_ schtasks.exe 1368 /create /tn 5265676973747279204B6579204E616D65 /tr "C:\Users\[..]\AppData\Local\Temp\Folder Name\winint.exe" /sc minute /mo 1 /F
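As a small aside, the task name passed to /tn is just hex-encoded ASCII; a quick Python one-liner decodes it:
>>> bytes.fromhex("5265676973747279204B6579204E616D65").decode("ascii")
'Registry Key Name'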
I - Runtime behaviour
In a nutshell, the following steps are executed by the program
sample.exe runs a basic anti-analysis check
AutoIT script body gets executed
AutoIT code:
Start execution logic
Remove Zone.Identifier
Create mutex
Sleep
Runs decryption routine on one embedded PE resource
Executes RunPE (via shellcode), in this case, self-process hollowing
Install persistence task
Start execution logic
Final payload (for this sample, DarkComet) does its dirty job
Layer 1
In order to avoid executing while being monitored, the first layer of the loader only checks whether it is being debugged; this is achieved by calling the good old IsDebuggerPresent at offset 0x00403b7A.
If that is the case, a fake message is displayed to the user and execution simply stops.
At this stage, no additional anti-analysis checks are performed and the execution proceeds just flawless, passing control to the AutoIT code interpreter.
Layer 2
This sample in particular is not at all heavily obfuscated and many core functions still have a descriptive name (i.e. binder, startup, persistautoinject, […]).
Nevertheless, the overall code can be broken up in 3 blocks.
top: naive obfuscation of some main native AutoIT functions + basic strings obfuscation function
middle: core functionalities
bottom: main execution logic
Let’s start from the bottom up; below the main steps executed at start
Function name Actions
enhkoxkvrufrsntgjkoyxiard removes the Zone.Identifier of the file ([sample]:Zone.Identifier); this prevents Windows from warning the user about the execution of an untrusted file
mutex a custom implementation (via Windows API calls -> Kernel32.dll -> CreateMutexW) for creating / checking a mutex and ensuring that only one instance of the infection is running
_cryptedfile it chains together multiple functions, but in the end it reads one PE resource from the sample file and decrypts it, making it available to the next function (injector)
dnqmjpfdpcuxwbwkadcaibgzw RunPE injection function, leveraging shellcode for loading the final payload (in this case Darkcomet)
startup the persistence module; it installs a task via schtasks.exe that runs the sample every minute
ughotphdsufuiehfpoegoakmi another way of checking if the sample is running and calling the RunPE injection function on it
The middle part of the code contains many more functions than the ones employed by this sample; something that aligns with the crypter builder's opt-in / opt-out behavior.
Among those functions, some are not used, at least in this sample, but are nonetheless interesting:
PersistAutoinject
Binder
UACBypass
USBSpreader
AntiVM
PersistAutoInject
After some cleanup and function renaming, it is clearly visible how the payload is injected into RegAsm.exe.
Note that, $_cryptedfile stores the decrypted PE resource, which is the final payload carried by the crypter.
Binder
Depending on the arguments supplied to the function, the payload stored in the packer is appended to a clean file and afterwards started.
The merged files can be dropped into %temp%, %appdata% or the folder the original file is running from.
UACBypass
Based on the OS version, two different types of UAC bypass tricks are run.
In the case of Windows 7 or 8 via eventvwr, and thanks to fodhelper on Windows 10.
USBSpreader
Its name speaks for itself; this is what it does:
Removable devices enumeration
For every device, folders are discovered
For every file without the .pif extension, the payload is copied into the folder and renamed to the original file name with the extension .pif appended.
The original file is deleted
AntiVM
It boils down to three registry checks
Generic one
VMware specific
VirtualBox specific
Finally reaching the top part of the code, a few calls immediately stand out from the crowd and look like good candidates, or at least building blocks, for RunPE and many other direct OS calls.
DllStructSetData
DllStructGetSize
DllStructGetPtr
DllStructGetData
DllStructCreate
DllCall
As for the string obfuscation, once the function is cleaned up it looks like this
Deobfuscation routine calls are scattered around the script, but the meaning of the strings is intuitive anyway.
RunPE - Process Hollowing
Once the carried payload is read and decrypted (AES256) with a hardcoded password available in the script, the next steps can be outlined like in the diagram.
Bear in mind that the UPX part is an on/off feature and might not be enabled for other samples.
The shellcode (below, just a small snippet) is embedded in the script and has its own function, here renamed RunPE; the whole body is hidden away thanks to the string obfuscation function uxharcuawtv.
Once extracted and converted into a suitable format, the first shellcode instructions walk the PEB to resolve the kernel32 and ntdll base addresses, later used for the respective API calls.
The shellcode also uses a basic hashing function instead of storing the strings of the respective Windows APIs.
The function in charge of computing the hash is located @ 0x00000092
The assembly snippet can be easily translated to Python.
win_apis = [
    "CreateProcessW",
    "VirtualAllocEx",
    "VirtualAlloc",
    "WriteProcessMemory",
    ...
    ...
    ...
]

def build_hash(api):
    mapping = map(ord, api)
    uVar2 = 0
    for i in mapping:
        uVar2 = (uVar2 << 4) + i
        if (uVar2 & 0xF0000000):
            uVar2 = (uVar2 ^ (uVar2 & 0xf0000000) >> 0x18) & 0xfffffff
    return hex(uVar2)

for win_api in win_apis:
    print("{}\t{}".format(build_hash(win_api), win_api))
Once the exported functions from kernel32.dll and ntdll.dll have been enumerated, it becomes trivial to map the hash values found in the shellcode to their equivalent string versions - as seen at the beginning of the section.
Hash API DLL API
0x73c3a79 ntdll.dll memcpy
0xb8a4a79 ntdll.dll RtlZeroMemory
0xc8338ee ntdll.dll NtUnmapViewOfSection
0x1e16457 kernel32.dll CreateProcessW
0x8cae418 kernel32.dll VirtualAllocEx
0x3d8cae3 kernel32.dll VirtualAlloc
0x648b099 kernel32.dll WriteProcessMemory
0x394ba93 kernel32.dll TerminateProcess
0x4b9c7e4 kernel32.dll GetThreadContext
0x4b887e4 kernel32.dll SetThreadContext
0x1d72da9 kernel32.dll ReadProcessMemory
0xb3dd105 kernel32.dll VirtualFree
0xf232744 kernel32.dll ResumeThread
0xd186fe8 kernel32.dll VirtualProtectEx
At the end, calling ResumeThread resumes the suspended process - now filled with the payload - and sets the carried malware free.
A quick internet search for similar shellcode wrappers yielded an almost identical version, released on a forum in 2016 by a user who goes by the moniker of Wardow.
One hypothesis, if the previous catch holds true, is that CypherIT's devs copy-pasted part of the wrapper / shellcode and embedded it straight into the crypter.
II - CypherIT
Looking at the CypherIT website, it is easy, at least for some of the functions, to map the advertised features 1:1 to the code just analyzed
Note: screenshots taken around April 2019
The packer has different price tiers, going from 30 up to 300 Euro; apart from that, support is also offered - surprisingly - 24/7, plus a Discord group.
I guess the good old Skype, ICQ and Jabber days are gone ;-)
III - MalSilo historical memory
MalSilo works with multiple backends, some for storing metadata and others for the samples - plus more that support different use cases.
Since the focus so far was on the crypter and not on the dropped payload, I thought it might also be interesting to investigate which other families were dispatched during the campaigns.
Due to the nature of the MalSilo project, it obviously has a limited view of threats around the world, but it can still provide some insights; with this in mind, let's first get an idea of how many samples passed through the system.
This can easily be achieved by querying MISP (backend #2) and organizing every collected sample in chronological order.
The chart below shows a total of 49 specimens; out of curiosity, the "patient zero" previously analyzed was detected on 2018-11-28.
Since the steps for tracking a malware family are somewhat unique due to the way MalSilo works, there is not much to share with the community, but the main points are:
A YARA rule is created for fingerprinting the crypter (note: at the time, the rule covered only the version previously described, not the latest one)
Backend #1, where samples are stored, is queried with the new rule + custom script is executed
For every match, sample metadata is extracted from backend #2 (MISP)
All samples are unpacked in bulk mode
For every sample
The hard-coded password for the encrypted payload is gathered
The carried payload is decrypted, extracted and saved to disk
Payloads are afterwards statically and dynamically fingerprinted. Dynamic checks are mandatory to overcome additional obfuscators or packers that make static detection useless; the final results look as follows.
PE resources
AutoIT provides an easy way to add custom resources to a file; this can be achieved via a specific User Defined Function (UDF) known as #AutoIt3Wrapper_Res_File_Add.
The interesting part is that it exposes the full path of the embedded resource, thus revealing - for some samples - the Windows username of the threat actor crafting the payload.
Keeping just the first part of the Windows path yields these names
c:\users\user
c:\users\robotmr
c:\users\lenovo pc
c:\users\hp
c:\users\bingoman-pc
w:\work\client and crypter
c:\users\administrator
c:\users\pondy
c:\users\peter kamau
Plotting the Windows paths taken from every payload shows the following pattern
Looking at the graphs, it comes as no surprise that the packer was employed by off-the-shelf malware.
Out of curiosity, analyzing the crypters of the administrator user - who also crafted the payload investigated at the beginning of the article - shows that all three samples were most probably generated by the same (old?) CypherIT version.
I say old mostly because, if we look at the other samples, it is clearly visible that the overall script obfuscation technique was updated.
IV - Recent samples
The YARA rule originally developed reached its EOF - around 2019.02.11 - as soon as CypherIT received a major update.
Tearing apart a recent sample observed by MalSilo on 2019.07.23 (7b252dbb6a39ea4814d29ebfd865f8a46a69dd9f), and quickly skimming through some of the earlier ones (02.2019 - 07.2019), it is clearly visible how the obfuscation technique - and not only that - is more or less constantly updated.
From here we could start all over again :)
This last section will only investigate a few functions; below are some takeaways.
At run time, the first instruction to be executed is a loop, with nested if statements, that initializes a set of variables
A control variable, just set before the loop, decides which block should be jumped to next
Close to the end of every if block, the control variable is re-initialized, defining the next jump to take (see the sketch after this list)
Once a code block is executed, a custom function is resolved and called
Afterwards, execution starts almost in the same way as the old version
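To make the pattern concrete, here is a minimal, purely editorial illustration of control flow flattening, written in Python rather than AutoIt and with invented variable names:

```python
# Flattened version of what would otherwise be three sequential statements.
state = 2  # control variable set before the dispatcher loop
a = b = c = None
while state != 0:
    if state == 2:        # first real block
        a = 10
        state = 7         # re-initialize the control variable -> next block to jump to
    elif state == 7:      # second real block
        b = a * 3
        state = 5
    elif state == 5:      # third real block
        c = a + b
        state = 0         # leave the dispatcher loop
print(a, b, c)            # 10 30 40
```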
Control Flow Flattening (CFF), i.e. the steps just described, is employed in almost every function, together with string obfuscation - the latter sometimes with and sometimes without CFF.
When it comes to sandbox detection, there are two new functions that come into play
Detecting typical VM guest tools from VirtualBox and VMware
Tracking mouse movement
AntiVM_mouse_check snippet using CFF and string obfuscation
AntiVM_mouse_check after clean up
AntiVM_process_check
AntiVM_process_check after clean up
As far as the carried payload is concerned, the latest versions store it in multiple resources under RESOURCE_TYPE_FONTDIR; the function in charge of rebuilding it is shown below.
The first function parameter, $data, takes as input a list of resources separated by | and rebuilds the final payload.
$data=X|USXlhrrTcD|JqLn|hEuiNUhgRzrxs|nyFoHiqBt|PJYZYBUO|ChaHOMZLQtIa|AFpMebeesFkYteWii|FCSQpnQ|BHxAiLvVjtJlwSKA
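A rough Python sketch of that rebuilding logic, under the assumption that each name in $data identifies a resource whose raw bytes are concatenated in order (read_resource is a hypothetical helper, not part of the sample):

```python
def read_resource(name):
    """Hypothetical helper: return the raw bytes of the named FONTDIR resource."""
    raise NotImplementedError

def rebuild_payload(data):
    # "X|USXlhrrTcD|JqLn|..." -> concatenate each named resource chunk in order
    names = data.split("|")
    return b"".join(read_resource(name) for name in names)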
Nothing special to add about the shellcode, which for the few samples analyzed, remained the same.
Final thoughts
CypherIT did not introduce any game-changing new techniques; even after peeling off all the obfuscation layers, it keeps under the hood the same features observed back in 2018, while adding new tricks and functionalities to the portfolio (not fully outlined in the last section).
Even if recent samples rely on CFF and new string obfuscation machinations for slowing down analysis, some functions and related parameters are still talkative thanks to their naming convention.
Where the earlier versions of the specimen stored the payload in only one section of the PE file, the latest ones split and store it across multiple resources; in essence, though, the rebuilding technique stays the same as before.
In addition, since the code skeleton logic remained unchanged between updates, once new tricks are defeated, the analysis workflow can proceed more or less the same way as before.
Based on the MalSilo telemetry data outlined in the article, it can be hypothesized that the crypter is most common among commodity malware of the stealer category.
Appendix
ATT&CK Techniques
Tactic ID Name
Persistence T1053 Scheduled Task
Persistence T1158 Hide Files and Directories
Privilege escalation T1088 Bypass User Account Control
Defense evasion T1093 Process Hollowing
Defense evasion T1055 Process Injection
Defense evasion T1045 Software Packing
Defense evasion T1027 Obfuscated Files or Information
Defense evasion T1140 Deobfuscate/Decode Files or Information
YARA rule
rule cypherit_shellcode
{
meta:
author = "raw-data"
tlp = "white"
version = "1.0"
created = "2019-01-25"
modified = "2019-01-25"
description = "Detects CypherIT shellcode"
strings:
$win_api1 = { c7 8? ?? ?? ?? ?? ee 38 83 0c c7 8? ?? ?? ?? ?? 57 64 e1 01 c7 8? ?? ?? ?? ?? 18 e4 ca 08 }
$win_api2 = { c7 8? ?? ?? ?? ?? e3 ca d8 03 c7 8? ?? ?? ?? ?? 99 b0 48 06 }
$hashing_function = { 85 c9 74 20 0f be 07 c1 e6 04 03 f0 8b c6 25 00 00 00 f0 74 0b c1 e8 18 33 f0 81 e6 ff ff ff 0f 47 49 }
condition:
(1 of ($win_api*)) and $hashing_function
}
MISP event
CypherIT samples and payload [08.2018 - 02.2019]
IOCs - 2019.07.23
Collection date Crypter (sha1) Drop site
2019-07-23 7b252dbb6a39ea4814d29ebfd865f8a46a69dd9f hXXp://mimiplace[.]top/invoice.exe
IOCs [08.2018 - 02.2019]
49 drop-sites and 44 unique samples
Collection date Crypter (sha1) Drop site
2018-08-28 bdf0f4184794a4e997004beefde7a29066e47847 hXXp://com2c.com[.]au/filehome/4hih
2018-09-03 55646431095967fc5d41d239de70a8ffbd8d0833 hXXp://service-information-fimance[.]bid/Java.exe
2018-09-03 823e2d3ef005d36b1401472dd5fd687a652b81b0 hXXp://service-information-fimance[.]bid/NETFramework.exe
2018-09-03 ec33f922fb324d7d2d4ee567ceba4563c6700661 hXXp://service-information-fimance[.]bid/AMADEUSapp.exe
2018-09-04 aed69a2e740e789139118e3753107f9d892790c7 hXXp://letmeplaywithyou[.]com/grace/bless.exe
2018-09-05 4f193f9724f8b37fe998f5d159ece8528f608fa9 hXXps://a.doko[.]moe/izgvrd
2018-09-05 aed69a2e740e789139118e3753107f9d892790c7 hXXps://letmeplaywithyou[.]com/grace/bless.exe
2018-09-07 ddf3a42a85fb4ae2fe5c86e7305265e8c46e52a9 hXXp://bit[.]ly/2Q6hlGD
2018-09-07 ddf3a42a85fb4ae2fe5c86e7305265e8c46e52a9 hXXps://b.coka[.]la/sxPC9O.jpg
2018-09-17 a81bc74373b5d948472e089877fa3b64c83c4fda hXXps://a.doko[.]moe/hpofbv
2018-09-19 e82e4a048cc66dfee9979a2db70e63b60a6aa3cb hXXp://lse-my[.]asia/servfbtmi.exe
2018-09-19 3fae31b10d8154edd1bfcca1c98cc3f61a78fdac hXXp://thepandasparadise[.]com/cts/dfgf/ExceI_Protected.exe
2018-09-19 5387c0ead3450eaef1cc82e4c4a0b52982fb2952 hXXp://thepandasparadise[.]com/cts/dfgf/dfdgfh/server_Pro.exe
2018-09-19 50a250aeb3c685e04cd2fce62634d1b95920cbab hXXp://scientificwebs[.]com/1.exe
2018-09-19 65b0e55170715d14ed139a7e1cd1710685e19a7d hXXps://scientificwebs[.]com/1.exe
2018-09-19 686e7c7e4767cc7198c71bc99596c226fbf1ab36 hXXp://thepandasparadise[.]com/cts/dfgf/win32_Pro.exe
2018-09-19 05fec020078b53643cb14a8ec7db3f2aa131e572 hXXp://thepandasparadise[.]com/cts/dfgf/dfdgfh/win32_Pro.exe
2018-09-19 1adc1f7d7fd258a75c835efd1210aa2e159636aa hXXp://thepandasparadise[.]com/cts/ExceI_Protected.exe
2018-09-19 bbcd6a0a7f73ec06d2fab527f418fca6d05af3a6 hXXp://lse-my[.]asia/dotvmptee.exe
2018-09-19 a5944808c302944b5906d892a1fd77adaf4a309c hXXp://thepandasparadise[.]com/cts/dfgf/dfdgfh/fgbh/server_Pro.exe
2018-09-19 90d6f6bb6879862cb8d8da90c99cb764f064bc5a hXXp://thepandasparadise[.]com/cts/dfgf/winRAR1.exe
2018-09-20 50a250aeb3c685e04cd2fce62634d1b95920cbab hXXps://scientificwebs[.]com/1.exe
2018-09-20 a9856ca5ecba168cc5ebe39c3a04cb0c0b432466 hXXp://scientificwebs[.]com/1.exe
2018-09-21 7d2cddf145941456c7f89eb0ecbbaabb1eb4ef0a hXXps://b.coka[.]la/E5CoMb.jpg
2018-09-21 767945f40c2de439c5107456e34f014149da16e6 hXXp://lse-my[.]asia/servfbtmi.exe
2018-09-21 6a41eb6dbfe98444f126d42b1b5818767ced508d hXXp://lse-my[.]asia/servfbtmi.exe
2018-09-21 a9856ca5ecba168cc5ebe39c3a04cb0c0b432466 hXXps://scientificwebs[.]com/1.exe
2018-09-25 6e75dc48ec0380e189f67ba7f61aff99f5d68a04 hXXp://b.coka[.]la/sMZD0n.jpg
2018-09-25 3e5bef4eaf3975de6573a2fd22ce66ad6c88c652 hXXps://b.coka[.]la/E19F0D.jpg
2018-09-25 50c2b8ac2d8f04a172538abaa00bcb5dc135bb12 hXXp://b.coka[.]la/ZKW6B.jpg
2018-09-27 96136f00f59a44c2bce10755bcced5c362868766 hXXp://lse-my[.]asia/stbincrp.exe
2018-09-27 b199f2488849b3bcad92e85190d2525422b1a644 hXXps://share.dmca[.]gripe/FxJ0r9YOSecgw9FP
2018-09-28 8973591f584f2b104559cc5bc73838ff0df0e50f hXXp://lse-my[.]asia/injclientcrp.exe
2018-09-28 03469616ce1ce960edbc6be814ca3ab86902067d hXXp://lse-my[.]asia/stbincrp.exe
2018-09-28 2ef17bc8b67f17fb5957c8edc969aa5bdcc1c76e hXXp://lse-my[.]asia/goosmi.exe
2018-09-28 6bdad0aae469313a8567ad1138a744dca74f1ecc hXXp://lse-my[.]asia/pacbellcrp.exe
2018-11-08 b12dda5c58fd6f6c83921007e14984f29f85f768 hXXp://77.73.68[.]110/ftp92131/nj2.dat
2018-11-08 ab2047929be29957da14dc9114009b149fd8c6b2 hXXp://77.73.68[.]110/bullet967/ORDER883847777384pdf.exe
2018-11-08 8deb9352d11ed1057209fc572f401d83ad548b27 hXXp://77.73.68[.]110/ftp92131/q2.dat
2018-11-08 011e58cee3757035ca531b85b9cb9e3680a73ed5 hXXp://77.73.68[.]110/ftp92131/q1.dat
2018-11-08 0fd9c8c5c1275a0b9c6418bf55fe48bff4c56264 hXXps://e.coka[.]la/g3iTRU
2018-11-08 06f327a1e115f3e1b42ffbcedc235d9c2f8a7811 hXXp://77.73.68[.]110/ftp92131/nj1.dat
2018-11-10 a293b6868a0b82621e94be1266d09c49f1ff7e0b hXXps://s3.us-east-2.amazonaws[.]com/qued/faxbyjeny33.exe
2018-11-28 4448b907f2c9f9ee8e4b13e4b6c292eef4b70930 hXXps://f.coka[.]la/K2bkm.jpg
2018-11-30 1625aa77ed24ed9d052a0153e1939b5a32b352ed hXXps://e.coka[.]la/GRVzbl.jpg
2018-12-07 43fd77d2401618f8cc0a7ae63bc6bd5e52630498 hXXps://doc-00-5k-docs.googleusercontent[.]com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/rbdpoatvh5pc64k1st3d1atb7tcurkfh/1544212800000/11570855783461912856/*/15nlC5g9fvaX4VvpyZY-0L_HaSf5BpBaI?e=download
2019-02-11 6e75dc48ec0380e189f67ba7f61aff99f5d68a04 hXXps://b.coka[.]la/sMZD0n.jpg
2019-02-11 4c098028fa92129f9a40fb5f7fa3f3e60f9e2885 hXXps://b.coka[.]la/KMjalT.jpg
2019-02-11 68dcb96a0f096dc9846bf8bd3a41eb6b1fc764b2 hXXps://e.coka[.]la/BGZeW
Mapping between crypter and delivered payload
Crypter (sha1) Payload (sha1) Malware family payload
3fae31b10d8154edd1bfcca1c98cc3f61a78fdac b5d8fbe61e16c7d41d1d2b8ecb05db3f26328bad Generic-VBA-Injector
ec33f922fb324d7d2d4ee567ceba4563c6700661 b3b657d98212f654d787958940f2a9d47bfbea7e CyberGate/Rebhip
2ef17bc8b67f17fb5957c8edc969aa5bdcc1c76e 0ce26d4c9785c0bcdb617eaa5e5112f61704f00e Formbook
6a41eb6dbfe98444f126d42b1b5818767ced508d 035fa7cf96bf30c6f0aae990d9b03123a8d9147e Formbook
a9856ca5ecba168cc5ebe39c3a04cb0c0b432466 4fa9093716ae217a7c584d5fec6451284f99ae34 AgentTesla
50a250aeb3c685e04cd2fce62634d1b95920cbab 4fa9093716ae217a7c584d5fec6451284f99ae34 AgentTesla
68dcb96a0f096dc9846bf8bd3a41eb6b1fc764b2 7448566a87b0037c4826902353fffb5f572f7eae Remcos
55646431095967fc5d41d239de70a8ffbd8d0833 283d515db413c371d956568a2c80a18a2c6cff25 NanoCore
65b0e55170715d14ed139a7e1cd1710685e19a7d 4fa9093716ae217a7c584d5fec6451284f99ae34 AgentTesla
1625aa77ed24ed9d052a0153e1939b5a32b352ed 9ebef7e7a264cba868b0faeb7f34f5a5417cea36 Remcos
b199f2488849b3bcad92e85190d2525422b1a644 397f6f2bf9d5498c215662c164fe05f8090272cf Remcos
4f193f9724f8b37fe998f5d159ece8528f608fa9 f023cb03312770264fc71716c343a7f99ba77b37 AgentTesla
8deb9352d11ed1057209fc572f401d83ad548b27 2af16eb4711043e520369a3f27c97a80094df6ce QuasarRAT
5387c0ead3450eaef1cc82e4c4a0b52982fb2952 eaa912026092a81b42cfe1c51eba01132a051dd3 Generic-VBA-Injector
8973591f584f2b104559cc5bc73838ff0df0e50f b552cbb2b1a536ae1aa97dcdb68270036126931e Formbook
3e5bef4eaf3975de6573a2fd22ce66ad6c88c652 405dd0cf8527da5c586fa26b66ddcfad39febd61 AgentTesla
4448b907f2c9f9ee8e4b13e4b6c292eef4b70930 587bb64894c3bc5e46cfda3b777224f88a0b17f9 DarkComet
e82e4a048cc66dfee9979a2db70e63b60a6aa3cb 035fa7cf96bf30c6f0aae990d9b03123a8d9147e Formbook
0fd9c8c5c1275a0b9c6418bf55fe48bff4c56264 9567a636928edfa5c22d4c5fa761c38bcc6823a9 Remcos
4c098028fa92129f9a40fb5f7fa3f3e60f9e2885 af6cab774984d53451609bd26088309172737f89 AgentTesla
ab2047929be29957da14dc9114009b149fd8c6b2 8600cfa7fab36533ca02215202aefd7c68ecba9b Imminent
6e75dc48ec0380e189f67ba7f61aff99f5d68a04 f59755e9fa01362a9bc63f3e8da944eb3d3da3c4 AgentTesla
ddf3a42a85fb4ae2fe5c86e7305265e8c46e52a9 405dd0cf8527da5c586fa26b66ddcfad39febd61 AgentTesla
90d6f6bb6879862cb8d8da90c99cb764f064bc5a eaa912026092a81b42cfe1c51eba01132a051dd3 Generic-VBA-Injector
a293b6868a0b82621e94be1266d09c49f1ff7e0b cb24de30895442cf327d3947edd56be6503e2b13 Imminent
bdf0f4184794a4e997004beefde7a29066e47847 f023cb03312770264fc71716c343a7f99ba77b37 AgentTesla
b12dda5c58fd6f6c83921007e14984f29f85f768 0eeca43abeced0650d941ad8515bd744fa4176ed NjRAT
a5944808c302944b5906d892a1fd77adaf4a309c eaa912026092a81b42cfe1c51eba01132a051dd3 Generic-VBA-Injector
bbcd6a0a7f73ec06d2fab527f418fca6d05af3a6 17159c39c4ee765291035bbf5687dafeeb1bd380 Formbook
1adc1f7d7fd258a75c835efd1210aa2e159636aa b5d8fbe61e16c7d41d1d2b8ecb05db3f26328bad Generic-VBA-Injector
011e58cee3757035ca531b85b9cb9e3680a73ed5 6cd247e6d37d43a64741cb1e57efba96785d4c84 QuasarRAT
50c2b8ac2d8f04a172538abaa00bcb5dc135bb12 f13046e41b10d376a938cf60b29943459b58ee8a AgentTesla
6bdad0aae469313a8567ad1138a744dca74f1ecc d4ddf2da16dc503c3d14178caa84f993467e3fcd Formbook
823e2d3ef005d36b1401472dd5fd687a652b81b0 a2370c663e234d0f6a8a96c74cc7b4a28bdbcc71 Imminent
7d2cddf145941456c7f89eb0ecbbaabb1eb4ef0a 405dd0cf8527da5c586fa26b66ddcfad39febd61 AgentTesla
686e7c7e4767cc7198c71bc99596c226fbf1ab36 0de1d77d61f8a0132f1b6663351023e6b485615f NanoCore
03469616ce1ce960edbc6be814ca3ab86902067d 98f113a9d54688f7eec645855057a0910f1ebbf6 Azorult
aed69a2e740e789139118e3753107f9d892790c7 d5c7f3642d61a5297536e9aa0c4c3af9099cb247 Andromeda
05fec020078b53643cb14a8ec7db3f2aa131e572 0de1d77d61f8a0132f1b6663351023e6b485615f NanoCore
43fd77d2401618f8cc0a7ae63bc6bd5e52630498 f8c78342b9585588ec7a028e9581a93aeacb9747 NjRAT
06f327a1e115f3e1b42ffbcedc235d9c2f8a7811 945cdc67c1eb8fc027475a193ba206cf7ecd40b4 NjRAT
a81bc74373b5d948472e089877fa3b64c83c4fda 4d458a0b27e58ab7cf930c2ff55bfe4f083aa52d Remcos
Mapping between Windows user path and delivered payload
Crypter (sha1) Payload resource location Malware family payload
3fae31b10d8154edd1bfcca1c98cc3f61a78fdac c:\users\user\desktop\update\kuppq\yrjuhhjhai Generic-VBA-Injector
ec33f922fb324d7d2d4ee567ceba4563c6700661 c:\users\robotmr\desktop\cipherit\sirlv\orzrfpubgq CyberGate/Rebhip
2ef17bc8b67f17fb5957c8edc969aa5bdcc1c76e c:\users\user\desktop\cypherit\kvmsr\sylgrnaoja Formbook
6a41eb6dbfe98444f126d42b1b5818767ced508d c:\users\user\desktop\cypherit\kxiav\vyuvenhftx Formbook
a9856ca5ecba168cc5ebe39c3a04cb0c0b432466 c:\users\lenovo pc\documents\cypherit\dbcfr\evzthiwzwv AgentTesla
50a250aeb3c685e04cd2fce62634d1b95920cbab c:\users\lenovo pc\documents\cypherit\afwuh\pakfxtjjtr AgentTesla
68dcb96a0f096dc9846bf8bd3a41eb6b1fc764b2 c:\users\hp\desktop\cypherit\feung\aegmywmuhw Remcos
55646431095967fc5d41d239de70a8ffbd8d0833 c:\users\robotmr\desktop\cipherit\jkvkh\povdgmqwwf NanoCore
65b0e55170715d14ed139a7e1cd1710685e19a7d c:\users\lenovo pc\documents\cypherit\tsdsn\owkywxlpuo AgentTesla
1625aa77ed24ed9d052a0153e1939b5a32b352ed c:\users\hp\desktop\cypherit\jmtby\vtlfqlhyjd Remcos
b199f2488849b3bcad92e85190d2525422b1a644 c:\users\hp\desktop\cypherit\zfmal\tenkocitdk Remcos
4f193f9724f8b37fe998f5d159ece8528f608fa9 c:\users\bingoman-pc\downloads\update\skcha\zaqwzrhyrj AgentTesla
8deb9352d11ed1057209fc572f401d83ad548b27 w:\work\client and crypter\crypters\cypherit\upjzw\crccbblvqr QuasarRAT
5387c0ead3450eaef1cc82e4c4a0b52982fb2952 c:\users\user\desktop\update\sldmr\sjmaqkecsw Generic-VBA-Injector
8973591f584f2b104559cc5bc73838ff0df0e50f c:\users\user\desktop\cypherit\cnbvs\detwldamdu Formbook
3e5bef4eaf3975de6573a2fd22ce66ad6c88c652 c:\users\bingoman-pc\downloads\update\qdpsr\xgvtxjugvj AgentTesla
4448b907f2c9f9ee8e4b13e4b6c292eef4b70930 c:\users\administrator\desktop\cypherit\fvtit\lhoqctjmjo DarkComet
e82e4a048cc66dfee9979a2db70e63b60a6aa3cb c:\users\user\desktop\cypherit\nnxsi\cuzgxaenow Formbook
0fd9c8c5c1275a0b9c6418bf55fe48bff4c56264 c:\users\hp\desktop\cypherit\tjkxp\hoilettosu Remcos
4c098028fa92129f9a40fb5f7fa3f3e60f9e2885 c:\users\bingoman-pc\downloads\update\blcmk\mspmisscix AgentTesla
ab2047929be29957da14dc9114009b149fd8c6b2 c:\users\pondy\desktop\cypherit\jfdvd\bhbwgnhlpt Imminent
6e75dc48ec0380e189f67ba7f61aff99f5d68a04 c:\users\bingoman-pc\downloads\update\qdpsr\nsskftnenb AgentTesla
ddf3a42a85fb4ae2fe5c86e7305265e8c46e52a9 c:\users\bingoman-pc\downloads\update\skcha\iljxbfjwjz AgentTesla
90d6f6bb6879862cb8d8da90c99cb764f064bc5a c:\users\user\desktop\update\usbad\lrmrmblvxz Generic-VBA-Injector
a293b6868a0b82621e94be1266d09c49f1ff7e0b c:\users\administrator\desktop\cypherit\zajlq\torabywgww Imminent
bdf0f4184794a4e997004beefde7a29066e47847 c:\users\bingoman-pc\downloads\update\xmhea\fezrqknxti AgentTesla
b12dda5c58fd6f6c83921007e14984f29f85f768 w:\work\client and crypter\crypters\cypherit\upjzw\fuwikvopxn NjRAT
a5944808c302944b5906d892a1fd77adaf4a309c c:\users\user\desktop\update\rotre\xmdklwilso Generic-VBA-Injector
bbcd6a0a7f73ec06d2fab527f418fca6d05af3a6 c:\users\user\desktop\cypherit\nnxsi\mswdkgevxa Formbook
1adc1f7d7fd258a75c835efd1210aa2e159636aa c:\users\user\desktop\update\zoddm\ocidiuboxz Generic-VBA-Injector
011e58cee3757035ca531b85b9cb9e3680a73ed5 w:\work\client and crypter\crypters\cypherit\upjzw\hmalcrldse QuasarRAT
50c2b8ac2d8f04a172538abaa00bcb5dc135bb12 c:\users\bingoman-pc\downloads\update\nhupn\lbddirvobk AgentTesla
6bdad0aae469313a8567ad1138a744dca74f1ecc c:\users\user\desktop\cypherit\cnbvs\utrjwnkjsd Formbook
823e2d3ef005d36b1401472dd5fd687a652b81b0 c:\users\robotmr\desktop\cipherit\xiwpy\ztcitplyof Imminent
7d2cddf145941456c7f89eb0ecbbaabb1eb4ef0a c:\users\bingoman-pc\downloads\update\roqli\kfkldukcus AgentTesla
686e7c7e4767cc7198c71bc99596c226fbf1ab36 c:\users\user\desktop\update\fteyl\enudngaemy NanoCore
03469616ce1ce960edbc6be814ca3ab86902067d c:\users\user\desktop\cypherit\cnbvs\keafqrogtw Azorult
aed69a2e740e789139118e3753107f9d892790c7 c:\users\administrator\desktop\cyperit\hkmzn\hwgracgvjt Andromeda
05fec020078b53643cb14a8ec7db3f2aa131e572 c:\users\user\desktop\update\drweo\distzbbkvx NanoCore
43fd77d2401618f8cc0a7ae63bc6bd5e52630498 c:\users\peter kamau\desktop\cypher\dzncz\gxrqkcdtpc NjRAT
06f327a1e115f3e1b42ffbcedc235d9c2f8a7811 w:\work\client and crypter\crypters\cypherit\upjzw\gkviexpzpl NjRAT
a81bc74373b5d948472e089877fa3b64c83c4fda c:\users\hp\desktop\cypherit\hxnfr\gzlyqzcgpj Remcos
|
I am trying to come up with an algorithm to traverse all possible paths of a trinomial tree, and I am having difficulty coming up with one. Is there any literature on this, or has anyone else come across something similar?
To be specific, I am trying to compute the P&L of trades from node 0 for every possible final path.
EDIT: let me clarify - for a 1-step trinomial tree there are three paths: 1 (up), 0 (middle) or -1 (down). Adding another step makes the tree have 5 endpoints and 9 paths: 11 (up,up), 10 (up,mid), 01 (mid,up), 1-1 (up,down), 00 (mid,mid), -11 (down,up), 0-1 (mid,down), -10 (down,mid), -1-1 (down,down).
I am aware that the total number of paths ending at a given point is given by Pascal's tetrahedron, but I do not know how to come up with all the paths for an arbitrary (n-step) tree. So, for example, the sequence 1,-1,1,-1,0 would in this case end at the middle node of a 5-step tree.
Answers
If I have understood correctly, here is an example in Python. Any other programming language should be able to do something similar:
def search(path):
    # stop after 5 steps; print only the paths that end at the middle node (sum == 0)
    if len(path) == 5:
        if sum(path) == 0:
            print(path)
    else:
        search(path + [1])   # up
        search(path + [0])   # middle
        search(path + [-1])  # down
Simply invoke it with an empty list:
>> search([])
[1, 1, 0, -1, -1]
[1, 1, -1, 0, -1]
[1, 1, -1, -1, 0]
[1, 0, 1, -1, -1]
[1, 0, 0, 0, -1]
[1, 0, 0, -1, 0]
[1, 0, -1, 1, -1]
[1, 0, -1, 0, 0]
[1, 0, -1, -1, 1]
[1, -1, 1, 0, -1]
[1, -1, 1, -1, 0]
[1, -1, 0, 1, -1]
[1, -1, 0, 0, 0]
[1, -1, 0, -1, 1]
[1, -1, -1, 1, 0]
[1, -1, -1, 0, 1]
[0, 1, 1, -1, -1]
[0, 1, 0, 0, -1]
[0, 1, 0, -1, 0]
[0, 1, -1, 1, -1]
[0, 1, -1, 0, 0]
[0, 1, -1, -1, 1]
[0, 0, 1, 0, -1]
[0, 0, 1, -1, 0]
[0, 0, 0, 1, -1]
[0, 0, 0, 0, 0]
[0, 0, 0, -1, 1]
[0, 0, -1, 1, 0]
[0, 0, -1, 0, 1]
[0, -1, 1, 1, -1]
[0, -1, 1, 0, 0]
[0, -1, 1, -1, 1]
[0, -1, 0, 1, 0]
[0, -1, 0, 0, 1]
[0, -1, -1, 1, 1]
[-1, 1, 1, 0, -1]
[-1, 1, 1, -1, 0]
[-1, 1, 0, 1, -1]
[-1, 1, 0, 0, 0]
[-1, 1, 0, -1, 1]
[-1, 1, -1, 1, 0]
[-1, 1, -1, 0, 1]
[-1, 0, 1, 1, -1]
[-1, 0, 1, 0, 0]
[-1, 0, 1, -1, 1]
[-1, 0, 0, 1, 0]
[-1, 0, 0, 0, 1]
[-1, 0, -1, 1, 1]
[-1, -1, 1, 1, 0]
[-1, -1, 1, 0, 1]
[-1, -1, 0, 1, 1]
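As an editorial aside (not part of the original answer): for an arbitrary n-step tree, the same enumeration can be written without recursion using itertools.product, which yields all 3^n paths; the sum filter then selects any target end node.

```python
from itertools import product

def all_paths(n, end_node=None):
    # every length-n sequence of moves drawn from {up, middle, down}
    for path in product((1, 0, -1), repeat=n):
        if end_node is None or sum(path) == end_node:
            yield list(path)

# Example: the paths of a 5-step tree that end at the middle node
paths = list(all_paths(5, end_node=0))
print(len(paths))  # 51, matching the listing above
```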
|
Assignment 4
Assignment 4 has two purposes:
To give you more experience with linear regression
To introduce simulation as a means of studying the behavior of statistical techniques
This assignment is due October 25, 2020 at the end of the day.
Note
In this writeup, I will use capital letters (e.g. \(X\)) for random variables, and lowercase variables (e.g. \(x\)) for individual samples (or draws) from those random variables.
You will find the probability notes useful for understanding the derivations in this assignment.
Revision Log
October 19, 2020
Added clarifying notes to Warmup.
Simulation
One common way to understand the behavior of statistical techniques is to use simulation (often called Monte Carlo simulation). In a simulation, we use a pseudorandom number generator to make up data that follows particular patterns (or lack of patterns). We call this data synthetic data.
We then apply a statistical technique, such as a correlation coefficient or a linear regression, to the synthetic data, and see how closely its results match the parameters we put into our simulation. If the analysis reliably estimates the simulation's parameters, we say it recovers the parameters. We can do this many times to estimate that reliability — we can run the simulation 1000 times, for example, and examine the distribution of the error of its parameter estimates to see if it is unbiased, and how broad the errors are.
This technique is commonly used in statistics research (that is, research about statistics itself, rather than research that uses statistics to study other topics) in order to examine the behavior of statistical methods. By simulating samples of different sizes from a population with known parameters, we can compare the results of analyzing those samples with the actual values the statistical method is supposed to estimate. Further, by mapping its behavior over a range of scenarios, we can gain insight into what a statistical technique is likely doing with the particular data we have in front of us.
This is distinct from bootstrapping. In bootstrapping, we are resampling our sample to try to estimate the sampling distribution of a statistic with respect to the population our sample was drawn from; we have actual data, but do not know the actual population parameters. In simulation, we know the population parameters, and do not have any actual data because we make it all up with the random number generator.
Generating Random Numbers
NumPy's Generator class is the starting point for generating random numbers. It has methods for generating numbers from a range of distributions. For more sophisticated distributions, the various distributions in scipy.stats also support random draws.
Random number generators have a seed that is the starting point for picking numbers. Two identical generators with the same seed will produce the same sequence of values.
We can create a generator with np.random.default_rng:
rng = np.random.default_rng(20201014)
In my class examples, I have been using the current date as my seed. If you do not specify a seed, it will pick a fresh one every time you start the program; for reproducibility, it is advised to pick a seed for any particular analysis. It's also useful to re-run the analysis with a different seed and double-check that none of the conclusions changed.
We can then use the random number generator to generate random numbers from various distributions. It's important to note that random does not mean uniform — the uniform distribution is just one kind of random distribution.
For example, we can draw 100 samples from the standard normal distribution (\(\mu = 0\), \(\sigma = 1\)):
xs = rng.standard_normal(100)
Warmup: Correlation (10%)
If two variables are independent, their correlation should be zero, right? We can simulate this by drawing two arrays of 100 standard normal variables each, and computing their correlation coefficient:
xs = pd.Series(rng.standard_normal(100))
ys = pd.Series(rng.standard_normal(100))
xs.corr(ys)
Note
This code takes 100 draws (or samples) from the standard normal, twice (once for xs and again for ys).
Mathematically, we write this using \(\sim\) as the operator “drawn from”:
\[x_i \sim \mathrm{Normal}(0, 1), \qquad y_i \sim \mathrm{Normal}(0, 1)\]
Run 1000 iterations of this simulation to compute 1000 correlation coefficients. What is the mean and variance of these simulated coefficients? Plot their distribution. Are the results what you expect for computing correlations of uncorrelated variables?
Tip
What you need to do for this is to run the code example above — that computes the correlation between two 100-item samples — one thousand times.This will draw a total of 200,000 numbers (100 each for x and y, in each simulation iteration).
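A minimal sketch of that loop (assuming NumPy and Pandas are imported as in the earlier examples; the exact structure is up to you):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(20201014)

corrs = []
for _ in range(1000):                      # 1000 simulation iterations
    xs = pd.Series(rng.standard_normal(100))
    ys = pd.Series(rng.standard_normal(100))
    corrs.append(xs.corr(ys))              # correlation of two independent samples

corrs = pd.Series(corrs)
print(corrs.mean(), corrs.var())           # mean should be near 0, with nonzero spread
```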
Repeat the previous simulation, but using 1000 draws per iteration instead of 100. How does this change the mean and variance of the resulting coefficients?
Tip
Now you need to modify the code to draw 1000 normals for x and 1000 normals for y in each iteration. Remember that the example above is drawing 100 normals for each variable.
Remember the covariance of two variables is defined as:
\[\Cov(X, Y) = \E[(X - \E[X])(Y - \E[Y])] = \E[XY] - \E[X]\E[Y]\]
And the correlation is:
\[\rho_{XY} = \frac{\Cov(X, Y)}{\sigma_X \sigma_Y}\]
If we want to generate correlated variables, we can do so by combining two random variables to form a third:
\[Z = X + Y\]
We can draw them with:
xs = pd.Series(rng.standard_normal(100))
ys = pd.Series(rng.standard_normal(100))
zs = xs + ys
With these variables, we have:
\[\E[X] = 0, \quad \E[Y] = 0, \quad \E[Z] = \E[X + Y] = \E[X] + \E[Y] = 0\]
This last identity is from a property called linearity of expectation. We can now determine the covariance between \(X\) and \(Z\). As a preliminary, since \(X\) and \(Y\) are independent, their covariance \(\Cov(X, Y) = 0\). Further, their independence implies that \(\E[X Y] = \E[X]\E[Y]\), which from the equations above is \(0\).
With that:
\[\Cov(X, Z) = \E[XZ] - \E[X]\E[Z] = \E[X(X + Y)] = \E[X^2] + \E[XY] = \Var(X) + 0 = 1\]
The correlation coefficient depends on \(\Cov(X,Z) = 1\), \(\Var(X) = 1\), and \(\Var(Z)\). We can derive \(\Var(Z)\) as follows (using the independence of \(X\) and \(Y\)):
\[\Var(Z) = \Var(X + Y) = \Var(X) + \Var(Y) = 1 + 1 = 2\]
Therefore we have \(\sigma_X = 1\) (from its distribution) and \(\sigma_Z = \sqrt{2}\), and so the correlation is
\[\rho_{XZ} = \frac{\Cov(X, Z)}{\sigma_X \sigma_Z} = \frac{1}{\sqrt{2}} \approx 0.707\]
Covariance
You can compute the covariance with the Pandas .cov method. It's instructive to also plot that!
Run 1000 iterations simulating these correlated variables to compute 1000 correlation coefficients (xs.corr(zs)). Compute the mean and variance of these coefficients, and plot their distributions. Does this match what we expect from the analytic results? What happens when we compute correlations of 1000-element arrays in each iteration? What about 10000-element arrays?
Linear Regression (40%)
If we want to simulate a single-variable linear regression:
\[y = \alpha + \beta x + \epsilon\]
there are four things we need to control:
the distribution of \(x\)
the intercept \(\alpha\)
the slope \(\beta\)
the variance of errors \(\sigma_\epsilon^2\)
Remember that the linear regression model assumes errors are i.i.d. normal, and the OLS model will result in a mean error of 0; thus we have \(\epsilon \sim \mathrm{Normal}(0, \sigma_\epsilon)\). Sampling data for this model involves the following steps:
Sample \(x\)
Sample \(\epsilon\)
Compute \(y = \alpha + \beta x + \epsilon\)
Let's start with a very simple example: \(x\) is drawn from a standard normal, \(\alpha=0\), \(\beta=1\), and \(\sigma_\epsilon^2 = 1\).
xs = rng.standard_normal(1000)
errs = rng.standard_normal(1000)
ys = 0 + 1 * xs + errs
data = pd.DataFrame({
    'X': xs,
    'Y': ys
})
Fit a linear model to this data, predicting \(Y\) with \(X\). What are the intercept and slope? What is \(R^2\)? Are these values what you expect? Plot residuals vs. fitted and a Q-Q plot of residuals to check the model assumptions - do they hold?
Repeat the simulation 1000 times, fitting a linear model each time. Show the mean, variance, and a distribution plot of the intercept, slope, and \(R^2\).
Extracting Parameters
The RegressionResults class returned by .fit() contains the model parameters. The .params field has the coefficients (including intercept), and .rsquared has the \(R^2\) value:
fit.params['X']
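For the repeated-fit part, a rough sketch is shown below (assuming statsmodels' formula API, which matches the RegressionResults fields mentioned above; adapt the structure as you see fit):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(20201014)

results = []
for _ in range(1000):
    xs = rng.standard_normal(1000)
    errs = rng.standard_normal(1000)
    ys = 0 + 1 * xs + errs
    fit = smf.ols('Y ~ X', data=pd.DataFrame({'X': xs, 'Y': ys})).fit()
    results.append({
        'intercept': fit.params['Intercept'],
        'slope': fit.params['X'],
        'r2': fit.rsquared,
    })

results = pd.DataFrame(results)
print(results.mean())   # slope should be close to 1, intercept close to 0
print(results.var())
```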
Fit a model to data with \(\alpha=1\) and \(\beta=4\). Are the resulting model parameters what you expect? How did \(R^2\) change, and why? Do the linear model assumptions still hold? What are the distributions of the slope, intercept, and \(R^2\) if you do this 1000 times?
Nonlinear Data (15%)
Generate 1000 data points with the following distributions and formula:
Fit a linear model predicting \(y\) with \(x\). How well does the model fit? Do the assumptions hold?
Draw a scatter plot of \(x\) and \(y\).
Drawing Normals
You can draw from \(\mathrm{Normal}(0, 5)\) either by using the normal method of Generator, or by drawing an array of standard normals and multiplying it by 5.
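For example, either form below should produce draws from \(\mathrm{Normal}(0, 5)\) (a small illustration of the two options mentioned in the note):

```python
errs = rng.normal(0, 5, 1000)             # Generator.normal(loc, scale, size)
errs_alt = rng.standard_normal(1000) * 5  # equivalent: scale standard normals by 5
```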
Tip
The NumPy function np.exp computes \(e^x\).
Repeat with \(y = -2 + 3 x^3 + \epsilon\)
Non-Normal Covariates (15%)
Generate 1000 data points with the model:
Plot the distributions of \(X\) and \(Y\)
Fit a linear model predicting \(y\) with \(x\)
How well does this model fit? Do the assumptions hold?
Gamma Distributions
You can draw 1000 samples from the \(\mathrm{Gamma}(2, 1)\) distribution with:
rng.gamma(2, 1, 1000)
Multiple Regression (10%)
Now we're going to look at regression with two or more independent variables.
We will use the following data generating process:
Tip
To draw from \(\mathrm{Normal}(\mu, \sigma)\), you can draw xs from a standard normal and compute xs * σ + μ.
Fit a linear model y ~ x1 + x2 on 1000 data points drawn from this model. What are the intercept and coefficients from the model? Are they what you expect? Check the model assumptions — do they hold?
Note
You can draw both \(x_1\) and \(x_2\) simultaneously with:
xs = rng.multivariate_normal([10, -2], [[2, 0], [0, 5]], 100)
# turn into a data frame
xdf = pd.DataFrame(xs, columns=['X1', 'X2'])
The multivariate normal distribution is parameterized by a list (or array) of means, and a positive symmetric covariance matrix \(\Sigma\) defined as follows:
\[\Sigma_{ij} = \Cov(X_i, X_j)\]
That is, the diagonals of the matrix are the variances of the individual variables, and the other cells are the covariances between pairs of variables. The example code sets up the following matrix:
\[\Sigma = \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}\]
Correlated Predictors (10%)
Now we're going to see what happens when we have correlated predictor variables. Remember I said those were a problem?
We're going to use the multivariate normal from the hint in the previous part to draw correlated variables \(X_1\) and \(X_2\) to use as predictors. We will use the following procedure (a sketch in code follows the list):
Draw 1000 samples of variables \(X_1\) and \(X_2\) from a normal with means \(\langle 1, 3 \rangle\), variances of 1, and a covariance \(\Cov(X_1, X_2) = 0.85\):
xs = rng.multivariate_normal([1, 3], [[1, 0.85], [0.85, 1]], 1000)
Draw \(\epsilon \sim \mathrm{Normal}(0, 2)\)
Compute \(y = 3 + 2 x_1 + 3 x_2 + \epsilon\)
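A minimal sketch of that procedure (again assuming NumPy, Pandas, and statsmodels; the variable names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(20201014)

# step 1: correlated predictors with means <1, 3>, variances 1, covariance 0.85
xs = rng.multivariate_normal([1, 3], [[1, 0.85], [0.85, 1]], 1000)
df = pd.DataFrame(xs, columns=['x1', 'x2'])

# step 2: errors ~ Normal(0, 2)
eps = rng.normal(0, 2, 1000)

# step 3: response
df['y'] = 3 + 2 * df['x1'] + 3 * df['x2'] + eps

fit = smf.ols('y ~ x1 + x2', data=df).fit()
print(fit.params)
```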
Show a pairplot of our variables \(X_1\), \(X_2\), and \(Y\). What do we see about their distributions and relationships?
Fit a linear regression for y ~ x1 + x2. How well does it fit? Do its assumptions hold?
Run this simulation (drawing 1000 variables and fitting a linear model) 100 times. Show the mean, variance, and distribution plots of the estimated intercepts and coefficients (for x1 and x2).
Repeat the repeated simulation for a variety of different covariances from 0 to 1 (including at least 0, 1, 0.9, and 0.99). Create line plots (or a single plot with multiple colors) that show how the variance of the estimated regression parameters (intercept and \(x_1\) and \(x_2\) coefficients) changes as you increase the correlation (covariance) between \(X_1\) and \(X_2\).
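One possible shape for that sweep, building on the previous sketch (the covariance values used here are illustrative; a singular matrix at covariance 1 may produce warnings):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(20201014)

def fit_once(cov):
    xs = rng.multivariate_normal([1, 3], [[1, cov], [cov, 1]], 1000)
    df = pd.DataFrame(xs, columns=['x1', 'x2'])
    df['y'] = 3 + 2 * df['x1'] + 3 * df['x2'] + rng.normal(0, 2, 1000)
    return smf.ols('y ~ x1 + x2', data=df).fit().params

rows = []
for cov in [0, 0.5, 0.9, 0.99]:             # covariances to sweep (illustrative subset)
    params = pd.DataFrame([fit_once(cov) for _ in range(100)])
    var = params.var()
    rows.append({'cov': cov, 'var_intercept': var['Intercept'],
                 'var_x1': var['x1'], 'var_x2': var['x2']})

print(pd.DataFrame(rows))                    # plot these columns against cov
```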
Reflection (10%)
Write a couple of paragraphs about what you learned from this assignment.
Expected Time
As in A3, I'm providing here some estimates of how long I expect each part might take you.
Warmup: 1 hour
Linear Regression: 2 hours
Nonlinear Data: 30 minutes
Non-normal Covariates: 30 minutes
Multiple Regression: 1 hour
Correlated Predictors: 2 hours
Reflection: 1 hour
|
Hello!
I used to draw a rectangle of a certain color and thickness in PDF Viewer using annotations (with the JavaScript function addAnnot()).
Could I simply draw a rectangle with any function of the PDF-Tools Library or should I also create an Annotation with the PXCp_Add3DAnnotationW() function? The problem is I'm trying to use only the PDF-Tools in order to manipulate a PDF-Document.
Thanks for any answers!
Hi!
I have some questions on the PXCp_AddLineAnnotationW function:
1. The last parameter of the function is a pointer to a PXC_CommonAnnotInfo structure. It expects, among other things, an integer value for a color. Here is an extract from the docs with pseudo code:
Code: Select all
AnnotInfo.m_Color = RGB(200, 0, 100);
I couldn't find any equivalent of that function in WPF. How is that value calculated? Is RGB(255, 255, 255) = 16777215?
2. A comprehension question: should I draw four single lines in order to create a rectangle, or can I directly draw a rectangle with that function (with the parameter LPCPXC_RectF rect that specifies the bounding rectangle of the annotation)?
This is an extract from my code - I didn't define the values for AnnotInfo.m_Border.m_DashArray. No annotation is created. I tested it with the JavaScript command this.getAnnots(0); it returns null.
Code: Select all
var borderRect = new PdfXchangePro.PXC_RectF { left = selection.Left,
right = selection.Right,
top = selection.Top,
bottom = selection.Bottom };
int color = 16777215; // RGB(255, 255, 255) ???
var border = new PdfXchangePro.PXC_AnnotBorder { m_Width = StrToDouble(BorderThickness),
m_Type = PdfXchangePro.PXC_AnnotBorderStyle.ABS_Solid };
var borderInfo = new PdfXchangePro.PXC_CommonAnnotInfo{ m_Color = color,
m_Flags = Convert.ToInt32(PdfXchangePro.PXC_AnnotsFlags.AF_ReadOnly),
m_Opacity = _opacity,
m_Border = border };
var startPoint = new PdfXchangePro.PXC_PointF {x = selection.Left, y = selection.Top};
var endPoint = new PdfXchangePro.PXC_PointF {x = selection.Right, y = selection.Bottom};
int retval = PdfXchangePro.PXCp_AddLineAnnotationW(_handle,
0,
ref borderRect,
"xy",
"yx",
ref startPoint,
ref endPoint,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
color,
ref borderInfo); // function returns 0
Thanks!
Site Admin
Can you send me PDF generated by your code ?
P.S. The RGB 'macro' is equivalent to the following function:
Code: Select all
// r, g, and b in range from 0 to 255
ULONG _RGB(int r, int g, int b)
{
return (ULONG)(r + g * 256 + b * 65536);
}
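(Editorial note, not part of the original thread: by this formula, RGB(255, 255, 255) = 255 + 255·256 + 255·65536 = 255 + 65,280 + 16,711,680 = 16,777,215, which matches the value used in the code above.)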
Tracker Software (Project Director)
When attaching files to any message - please ensure they are archived and posted as a .ZIP, .RAR or .7z format - or they will not be posted - thanks.
I've got it! I had to close the document in the PDF viewer before creating a line annotation with PDF Tools Library function PXCp_AddLineAnnotationW!
I still don't understand whether I can create a rectangle as one annotation, or whether I have to construct it by generating 4 single lines. The third parameter (LPCPXC_RectF rect) in this function is, according to the documentation, the bounding rectangle of the annotation. What is the use of it?
One more question. Is it possible to suppress the pop-up annotation dialog that appears after a double-click on the annotation? Is there any parameter in the PXCp_AddLineAnnotationW function to manage it? I've only found the flag PXC_AnnotsFlags.AF_ReadOnly of the PXC_CommonAnnotInfo class, which makes the line annotation (or the annotation bounding rectangle?) read-only.
Tracker Supp-Stefan
Hi Relapse,
It is definitely possible with the low level functions, but unfortunately the only High Level ones are for adding annotations - not for deleting them.
Best,
Stefan
Tracker Supp-Stefan
Hello relapse,
As mentioned before - there are no high level functions that will allow you to delete annotations.
So you will need to read on annotations in the PDF Reference:
http://wwwimages.adobe.com/www.adobe.co ... ce_1-7.pdf
section 8.4 Annotations in the above document
or section 12.4 Annotations in the ISO version of the file:
http://wwwimages.adobe.com/www.adobe.co ... 0_2008.pdf
And then utilize the low level functions described in
3.2.5 PDF Dictionary Functions of our PDF Tools SDK manual to read and manipulate the annotations dictionary as needed.
Alternatively - you could use JS while you have the files opened in the Viewer AX. This should be quite a lot easier to implement, and will still allow you to create/edit/delete annotations as needed.
Best,
Stefan
Tracker
Tracker Supp-Stefan
Hi Relapse,
There are some snippets inside the manual, but there isn't anything more complex - as those are low level functions giving you access to the very structure of the PDF File and the way you would like to use such methods will greatly vary from case to case. You will need to get yourself acquainted with the PDF specification to be able to use those successfully.
Best,
Stefan
I do read the pdf specification
I cannot understand how it is possible to access the Annotations dictionary of a certain page.
I've found the function PXCp_ObjectGetDictionary, but it needs an object handle. Where can I get it?
Tracker Supp-Stefan
Hi relapse,
You might want to use functions like PXCp_llGetObjectByIndex to obtain an object first, and then here is the sample from the manual for using the PXCp_ObjectGetDictionary function:
Code: Select all
// Retrieve object's dictionary
HPDFOBJECT hObject;
...
HPDFDICTIONARY hDict;
hr = PXCp_ObjectGetDictionary(hObject, &hDict);
if (IS_DS_FAILED(hr))
{
// report error
...
}
Best,
Stefan
I try to use the PXC_Rect function in order to draw a real rectangle and not an annotation.
HRESULT PXC_Rect(
_PXCContent* content,
double left,
double top,
double right,
double bottom
);
Parameters
content [in] Parameter content specifies the identifier for the page content to which the function will be applied.
What is this identifier for the page content and how can I get it?
Thanks!
Tracker Supp-Stefan
Hi Relapse,
This method is from the PXCLIB40 set of functions - those are aimed at creating new PDF document from scratch - so you can not use that to just add a rectangle to an already existing page I am afraid.
Otherwise - you can see how the content identifier is to be set up in the sample projects in
C:\Program Files\Tracker Software\PDF-XChange PRO 4 SDK\Examples\SDKExamples\<<YOUR Programming language>>\PDFXCDemo
Best,
Stefan
Thanks, Stefan, your patience is honorable.
Is there any difference between
HRESULT PXCp_Init(PDFDocument* pObject, LPCSTR Key, LPCSTR DevCode);
and
HRESULT PXC_NewDocument(_PXCDocument** pdf, LPCSTR key, LPCSTR devCode);
? Are the two parameters PDFDocument* pObject and _PXCDocument** pdf identical?
I've tried to mix the use of both libraries, but I've got an AccessViolationException executing the PXC_GetPage function:
Code: Select all
int pageContentIdentifier;
int pdfHandle;
int pdfPage = 0;
PdfXchangePro.PXCp_Init(out pdfHandle, PdfXchangePro.SerialNumber, PdfXchangePro.DevelopmentCode);
PdfXchangePro.PXCp_ReadDocumentW(pdfHandle, _tempFile, 0);
PdfXchange.PXC_GetPage(pdfHandle, pdfPage, out pageContentIdentifier);
PdfXchange.PXC_Rect(pdfHandle, 20, 100, 100, 20);
I've also found no function to delete a newly created (with PXC_Rect) graphical object - or is it not possible at all?
Tracker Supp-Stefan
Hi Relapse,
I am afraid you can't mix methods from the two libraries. You will need to create and save a PDF file using the PXC_ methods, and then open it and modify it using the PXCp_ ones.
As the PXC_ methods are designed for building up PDF files - there are no delete methods - you are creating a pdf file or page starting from an empty one and only adding the components you want.
Best,
Stefan
Yesterday I managed to draw a rectangle I needed but the restriction is - it must be a new pdf document. It's a pity!
Now I'm trying to delete line annotations directly in dictionaries. By the way I can create a line annotation of any thickness with the function PXCp_AddLineAnnotationW, there is no such a limit of 20 points as in JS. But I miss very much any examples for handling the dictionaries. I've found an example in the forum http://www.tracker-software.com/forum3/ ... nnotationW but it's in C++ and I'm fighting with translation of the of the low-level functions' declarations into C#.
Tracker Supp-Stefan
Hello Relapse,
Glad to hear that you got it working. And great to hear there are no width limitations with the PXCp_AddLineAnnotationW method.
As for samples for handling dictionaries - I am afraid that I can't help - any samples would probably be in the PDF Specification itself.
Best,
Stefan
The best advice here is to look at the C# wrappers for other projects. It is important to use the proper marshalling for types like BSTR and LPWSTR (from C# "string" types). If you look at function declarations for DLL imports in C# you'll often see a function argument prefixed by something like:
Code: Select all
[MarshalAs(UnmanagedType.LPWStr)]
Code: Select all
sometype somefunction([MarshalAs(UnmanagedType.LPWStr)] string InputLPWSTR);
UnmanagedType has a lot of members (LPWStr, BStr, etc) that you can specify for different scenarios. Check MSDN for details or use autocomplete in Visual Studio to see a list.
Also note the use of "ref" and "out" keywords that are used when the API function takes a pointer. "ref" means C# will check to see if the value is initialized; "out" means it may be uninitialized and is expected to be set by the function.
Code: Select all
E.g. C++:
HRESULT calculate_property_of_mystruct(mystruct* input, int* output);
would be imported into C# with:
... calculate_property_of_mystruct(ref mystruct input, out int output);
Lots of reading here:
http://msdn.microsoft.com/en-us/library/26thfadc.aspx
http://msdn.microsoft.com/en-us/library/fzhhdwae.aspx
|
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] mod_wsgi (pid=579): Exception occurred processing WSGI script '/opt/repo/ROOT/application'.
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] Traceback (most recent call last):
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 187, in __call__
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] response = self.get_response(request)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 199, in get_response
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 236, in handle_uncaught_exception
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return debug.technical_500_response(request, *exc_info)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/views/debug.py", line 91, in technical_500_response
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] html = reporter.get_traceback_html()
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/views/debug.py", line 350, in get_traceback_html
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return t.render(c)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/base.py", line 148, in render
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return self._render(context)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/base.py", line 142, in _render
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return self.nodelist.render(context)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/base.py", line 844, in render
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] bit = self.render_node(node, context)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/debug.py", line 80, in render_node
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return node.render(context)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/debug.py", line 90, in render
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] output = self.filter_expression.resolve(context)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/base.py", line 624, in resolve
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] new_obj = func(obj, *arg_vals)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/template/defaultfilters.py", line 769, in date
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return format(value, arg)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/dateformat.py", line 343, in format
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return df.format(format_string)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/dateformat.py", line 35, in format
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] pieces.append(force_text(getattr(self, piece)()))
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/dateformat.py", line 268, in r
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return self.format('D, j M Y H:i:s O')
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/dateformat.py", line 35, in format
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] pieces.append(force_text(getattr(self, piece)()))
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/encoding.py", line 85, in force_text
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] s = six.text_type(s)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/functional.py", line 144, in __text_cast
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return func(*self.__args, **self.__kw)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 83, in ugettext
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return _trans.ugettext(message)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 325, in ugettext
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] return do_translate(message, 'ugettext')
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 306, in do_translate
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] _default = translation(settings.LANGUAGE_CODE)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 209, in translation
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] default_translation = _fetch(settings.LANGUAGE_CODE)
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] File "/opt/repo/virtenv/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 189, in _fetch
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] "The translation infrastructure cannot be initialized before the "
[Sat Dec 20 16:43:15 2014] [error] [client 178.141.172.27] AppRegistryNotReady: The translation infrastructure cannot be initialized before the apps registry is ready. Check that you don't make non-lazy gettext calls at import time.
[program:yoursite]
command=uwsgi --ini /etc/uwsgi.ini
autostart=true
autorestart=true
stderr_logfile = /tmp/uwsgi-err.log
stdout_logfile = /tmp/uwsgi.log
[uwsgi]
chdir = /home/projectroot
wsgi-file = /home/projectroot/wsgi.py
home = /home/projectroot/.env
logto = /var/log/uwsgi.log
master = true
processes = 10
socket = /tmp/yoursite.sock
vacuum = true
touch-reload = /tmp/yoursite.reload
server {
listen 80;
server_name yoursite.com;
access_log /home/var/log/nginx/yoursite.nginx.access.log;
error_log /home/var/log/nginx/yoursite.nginx.error.log;
location / {
uwsgi_pass unix:///tmp/yoursite.sock;
include uwsgi_params;
}
location /static/ {
alias /home/yoursite/assets/;
}
}
|
Data Science for Software Engineering (ds4se) is an academic initiative to perform exploratory analysis on software engineering artifacts and metadata. Data Management, Analysis, and Benchmarking for DL and Traceability.
.dvc 2 months ago
.github 11 months ago
blogs 1 week ago
docs 1 month ago
ds4se 1 month ago
dvc-ds4se 3 weeks ago
nbs 5 days ago
notebooks 3 months ago
.dvcignore 3 months ago
.gitignore 3 months ago
.pypirc 3 months ago
CONTRIBUTING.md 11 months ago
DS4SE.png 2 months ago
Dockerfile 11 months ago
LICENSE 11 months ago
Makefile 11 months ago
README.md 2 months ago
index.ipynb 2 months ago
requirements.txt 1 month ago
settings.ini 1 month ago
setup.py 2 months ago
start.sh 3 months ago
Legend
DVC Managed File
Git Managed File
Metric
Stage File
External File
Data Science for Software Engineering (ds4se) is an academic initiative to perform exploratory analysis on software engineering artifacts and metadata. Data Management, Analysis, and Benchmarking for DL and Traceability.
This documentation is composed of 4 parts:
1) architecture
2) deployment
3) installation
4) usage
Below is the architecture diagram of the DS4SE library.
Users of the DS4SE API will pass in either strings or pandas dataframes containing the content of source or target artifacts as input, and get different analytical results for that input depending on the function the user calls.
The DS4SE library is divided into two parts, Traceability and Analysis, corresponding to different usages of the API.
The traceability part consists only one method: TraceLinkValue(), that will process strings with user-specified technique. This method intends to support 6 different techniques:
VSM
orthogonal
JS
LDA
LSA
word2vec
doc2vec
Currently only word2vec and doc2vec are implemented. The implementations are in notebook 3.4_facade.ipynb and the corresponding generated facade.py file. The actual implementations of word2vec and doc2vec are in notebook 3.2_mining.unsupervised.eval.ipynb and the corresponding nbdev-generated eval.py. As the diagram shows, 3.4_facade.ipynb imports eval.py to instantiate either a word2vec or a doc2vec class object. That object then loads the "*.model" file and starts the calculation.
To further add implementations for other techniques, programmers should modify notebook 3.4_facade.ipynb.
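As a rough, hypothetical illustration of that extension point (this is not the project's actual code, and the loader and method names below are placeholders), the facade can dispatch on the technique string:
# hypothetical sketch of a technique dispatch inside facade.py -- placeholder names throughout
def _load_word2vec_model():
    raise NotImplementedError("would load the word2vec *.model file, as eval.py does today")

def _load_doc2vec_model():
    raise NotImplementedError("would load the doc2vec *.model file")

def _load_lda_model():
    raise NotImplementedError("a new technique such as LDA would plug in here")

_TECHNIQUES = {
    "word2vec": _load_word2vec_model,
    "doc2vec": _load_doc2vec_model,
    "LDA": _load_lda_model,
}

def trace_link_value(source_text, target_text, technique="word2vec"):
    if technique not in _TECHNIQUES:
        raise ValueError("unsupported technique: %s" % technique)
    model = _TECHNIQUES[technique]()
    return model.distance(source_text, target_text)  # placeholder method name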
The analysis part of the API consists of nine methods:
NumDoc()
VocabShared()
VocabSize()
Vocab()
SharedVocabSize()
AverageToken()
CrossEntropy()
KLDivergence()
MutualInformation()
Currently only KLDivergence() and MutualInformation() are not implemented. The implementations are in notebook 3.4_facade.ipynb and the corresponding generated facade.py file.
All methods in this section take pandas dataframe(s) as input.
The NumDoc() method is simple enough to stand on its own; it just counts the number of rows in the dataframes.
VocabShared(), VocabSize(), Vocab(), SharedVocabSize(), and AverageToken() only need a sentencepiece BPE processor model to function. Each of these methods instantiates a processor from a "*.model" file and receives a Counter object in which the results are stored.
The actual implementation of CrossEntropy() is in notebook 1.0_exp.i.ipynb as dit_shannon(). CrossEntropy() simply combines the two user-provided dataframes, processes the result through the sentencepiece processor, and calls dit_shannon() with the resulting Counter object.
To add implementations for KLDivergence() and MutualInformation(), notebook 3.4_facade.ipynb should be modified.
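For intuition, the vocabulary counting those methods rely on can be reproduced in a few lines with sentencepiece and collections.Counter; the model path below is a placeholder, not the file shipped with ds4se:
from collections import Counter
import pandas as pd
import sentencepiece as spm

# placeholder model path -- the real BPE model ships with the ds4se package
sp = spm.SentencePieceProcessor()
sp.Load("bpe.model")

df = pd.DataFrame({"contents": ["hello world", "this is a content of another file"]})

counts = Counter()
for doc in df["contents"]:
    counts.update(sp.EncodeAsPieces(doc))  # BPE tokens of one document

print(len(counts))            # roughly what VocabSize() reports for one artifact class
print(counts.most_common(3))  # roughly what Vocab()/VocabShared() report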
The API is deployed to pypi at https://pypi.org/project/ds4se/.
To deploy a future version of the API, follow the steps listed below:
1) open settings.ini and increment the version number.
2) open a terminal and run the following command to package the library:
python3 setup.py sdist bdist_wheel
3) run the following command to upload the package:
twine upload dist/*
4) when prompted for the username, type in the username:
ds4se
5) when prompted for the password, type in the password:
ds4seCS435
Note: you might need to run the following commands to make sure you have the latest versions of setuptools, wheel and twine:
python3 -m pip install --user --upgrade setuptools wheel
python3 -m pip install --user --upgrade twine
To include non-".py" files in the package, modify the package_data variable in setup.py. For example, if you want to include "hello.model" and "world.csv" in the package, package_data should be:
package_data={'': ['hello.model','world.csv']},
To install the API, run the following command:
pip install ds4se
If you need to upgrade DS4SE, run:
pip install DS4SE --upgrade
The library requires several other libraries; to install or upgrade them, run these commands:
pip install --upgrade gensim
pip install nbdev
pip install sentencepiece
pip install dit
After installing or upgrading the above libraries, DS4SE is ready to use!
After installing the API, import ds4se.facade to use its functionalities.
import ds4se.facade as facade
Use TraceLinkValue() to calculate the trace link value of a proposed trace link. The function takes two strings, the full contents of a source file and of a target file, feeds them into the model the user specifies, and returns the traceability value.
Supported technique models: VSM, LDA, orthogonal, LSA, JS, word2vec, doc2vec
The function returns a tuple of two numbers: the first element is the distance between the two artifacts and the second is the similarity between them, which is the traceability value.
facade.TraceLinkValue("source_string is a string of entire content of one source file","target_string is a string of entire content of one targetfile","word2vec")
2020-11-01 22:55:01,937 : INFO : adding document #0 to Dictionary(0 unique tokens: [])
2020-11-01 22:55:01,947 : INFO : built Dictionary(1815 unique tokens: ['@return', 'Converts', 'The', 'a', 'and']...) from 153 documents (total 5769 corpus positions)
2020-11-01 22:55:01,949 : INFO : loading Word2Vec object from c:\users\admin\desktop\fall2020\software engineering\project\github desktop\ds4se\ds4se\model\word2vec_libest.model
2020-11-01 22:55:01,997 : INFO : loading wv recursively from c:\users\admin\desktop\fall2020\software engineering\project\github desktop\ds4se\ds4se\model\word2vec_libest.model.wv.* with mmap=None
2020-11-01 22:55:01,998 : INFO : setting ignored attribute vectors_norm to None
2020-11-01 22:55:01,999 : INFO : loading vocabulary recursively from c:\users\admin\desktop\fall2020\software engineering\project\github desktop\ds4se\ds4se\model\word2vec_libest.model.vocabulary.* with mmap=None
2020-11-01 22:55:01,999 : INFO : loading trainables recursively from c:\users\admin\desktop\fall2020\software engineering\project\github desktop\ds4se\ds4se\model\word2vec_libest.model.trainables.* with mmap=None
2020-11-01 22:55:02,001 : INFO : setting ignored attribute cum_table to None
2020-11-01 22:55:02,002 : INFO : loaded c:\users\admin\desktop\fall2020\software engineering\project\github desktop\ds4se\ds4se\model\word2vec_libest.model
2020-11-01 22:55:02,015 : INFO : precomputing L2-norms of word weight vectors
2020-11-01 22:55:02,019 : INFO : constructing a sparse term similarity matrix using <gensim.models.keyedvectors.WordEmbeddingSimilarityIndex object at 0x000001F77D3A65B0>
2020-11-01 22:55:02,020 : INFO : iterating over columns in dictionary order
2020-11-01 22:55:02,022 : INFO : PROGRESS: at 0.06% columns (1 / 1815, 0.055096% density, 0.055096% projected density)
2020-11-01 22:55:02,167 : INFO : PROGRESS: at 55.15% columns (1001 / 1815, 0.140033% density, 0.209102% projected density)
2020-11-01 22:55:02,227 : INFO : constructed a sparse term similarity matrix with 0.173668% density
2020-11-01 22:55:02,235 : INFO : Removed 7 and 7 OOV words from document 1 and 2 (respectively).
2020-11-01 22:55:02,236 : INFO : adding document #0 to Dictionary(0 unique tokens: [])
2020-11-01 22:55:02,238 : INFO : built Dictionary(4 unique tokens: ['content', 'file', 'one', 'string']) from 2 documents (total 7 corpus positions)
2020-11-01 22:55:02,239 : INFO : Computed distances or similarities ('source', 'target')[[0.12804699828021432, 0.88648788705131]]
(0.12804699828021432, 0.88648788705131)
word2vec_metric is an optional parameter when using word2vec as the technique; the available metrics are:
WMD
SCM
This is the data analysis part of the ds4se library; users can use it to conduct analysis on artifacts with information theory and statistical methods.
For all functions in the analysis part, the input should be a pandas dataframe with the following structure:
import pandas as pd
d = {'contents': ["hello world", "this is a content of another file"]}
df = pd.DataFrame(data=d)
print(df)
                             contents
0                         hello world
1  this is a content of another file
This method processes the dataframes of artifact contents and returns the number of documents each artifact class contains.
It takes two parameters, a pandas dataframe for the source artifacts and a pandas dataframe for the target artifacts, and performs the calculation for both classes.
The method returns a list of 4 integers:
1: number of documents for source artifacts;
2: number of documents for target artifacts;
3: source difference (difference between previous two results);
4: target difference (same as above, but opposite sign).
result = facade.NumDoc(source_df, target_df)
source_doc = result[0]
target_doc = result[1]
difference_source = result[2]
difference_target = result[3]
print("The number of documents for source is {} , with {} source difference".format(source_doc, difference_source))
print("The number of documents for target is {} , with {} target difference".format(target_doc, difference_target))
The number of documents for source is 2 , with 0 source difference
The number of documents for target is 2 , with 0 target difference
This method processes the dataframes of artifact contents and returns the vocabulary size of each artifact class.
The method takes two parameters, source artifacts and target artifacts, and performs the calculation for both classes.
The method returns a list of 4 integers:
1: vocabulary size for source artifacts;
2: vocabulary size for target artifacts;
3: source difference;
4: target difference.
vocab_result = facade.VocabSize(source_df, target_df)
source = vocab_result[0]
target = vocab_result[1]
difference_source = vocab_result[2]
difference_target = vocab_result[3]
print("The vocabulary size for source is {} , with {} target difference".format(source, difference_source))
print("The vocabulary size for target is {} , with {} target difference".format(target, difference_target))
The vocabulary size for source is 10 , with 0 target difference
The vocabulary size for target is 10 , with 0 target difference
This method processes the dataframes of artifact contents and returns the average number of tokens in each artifact class.
It does the calculation by first finding the total number of tokens for each artifact class and then dividing that total by the number of documents in the class.
The method takes two parameters, source artifacts and target artifacts, and performs the calculation for both classes.
The method returns a list of 4 integers:
1: average number of token for source artifacts;
2: average number of token for target artifacts;
3: source difference;
4: target difference.
token_result = facade.AverageToken(source_df, target_df)
source = token_result[0]
target = token_result[1]
difference_source = token_result[2]
difference_target = token_result[3]
print("The number of average token for source is {} , with {} source difference".format(source, difference_source))
print("The number of average token for target is {} , with {} target difference".format(target, difference_target))
The number of average token for source is 107 , with 35 source difference
The number of average token for target is 143 , with -35 target difference
This method processes the dataframes of artifact contents and returns the three most frequent terms appearing in each artifact class. It employs a BPE model to process the contents of each dataframe.
The method takes in two parameters,
1: source artifacts,
2: target artifacts,
and it will do calculation for both classes.
The method returns a dictionary with
key: token
value: a list of count and frequency
facade.VocabShared(source_df,target_df)
{'est': [160, 0.16], 'http': [136, 0.136], 'frequnecy': [124, 0.124]}
If a user only needs the term frequency of one of the two classes, they can use the Vocab() function, which is exactly the same except that Vocab() processes a single dataframe for one artifact class.
facade.Vocab(artifacts_df)
{'est': [141, 0.141], 'http': [136, 0.136], 'frequnecy': [156, 0.156]}
To compute the following metrics over both the source and the target artifacts, use the functions below.
All of the methods below require two parameters, the source and target artifacts, both as dataframes.
They all return a single integer value.
Shared vocabulary size
Returns the total vocabulary size of source and target combined.
facade.SharedVocabSize(source_df, target_df)
112
Mutual information
facade.MutualInformation(source_df, target_df)
127
CrossEntropy
CrossEntropy() calculates the Shannon entropy of the combined source and target artifacts; it returns an integer.
facade.CrossEntropy(source_df, target_df)
171
KL Divergence
facade.KLDivergence(source_df, target_df)
152
|
okay, so i was a little optimistic with that last edit -
i can't finish this.
the pointers are all over the place, and there are nice filename tables. here's the bulk of it:
(the 4 segments are aligned in 4096-byte blocks - hence your "motherload of zeroes"):
HEADER (see below)
SEGMENT 1 (filenames at end)
SEGMENT 2 (filenames at end too..)
SEGMENT 3 (smallest - expected pointers)
segments are logged in the bms script
Code:
endian big
## header
idstring "res\x0a"
get DUMMY short
get DUMMY short
for i = 1 to 3
if i == 1
get pSECTOR1 long
get sSECTOR1 long
log "SECTOR_1.dat" pSECTOR1 sSECTOR1
else
get pSECTOR long
get sSECTOR long
if i == 2
get pTABLE long
endif
set NAME string "SECTOR_"
string NAME += i
string NAME += ".dat"
log NAME pSECTOR sSECTOR
endif
next i
get SECTORS long
for i = 1 to SECTORS
getdstring NAME 4
# print "%NAME%"
get DUMMY short # >1, some odd, not unique
get DUMMY short # flags? 4,8,32,128
next i
## end of header
other stuff:
the filenames should be pointed to, though can be read with 4-byte padding after string (get NAME string \n padding 4)
Code:
## debugging :
set SEEK pSECTOR1 ### this is why i kept the first pointer separately ^^
math SEEK += pTABLE
goto SEEK
get FILES long
get DUMMY long # 4
savepos SEEK
math FILES *= 12 # short, short, char[4], short, short
math SEEK += FILES
goto SEEK
for
get FILENAME string
strlen FNLEN FILENAME
if FNLEN == 0
cleanexit
endif
padding 4
print "%FILENAME%"
next
the structure before them is 12 bytes -- short, short, char[4], short, short
the third segment is : get Count long, get DUMMY long (4), with a 20-byte structure of getdstring NAME 4 and 8 shorts. i was expecting this data to point to the file, but it isn't consistent..
if the segment/sector size is 0, the following sector will start at the same position.
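for anyone who'd rather poke at the filename table from python instead of quickbms, here's a rough translation of the notes above into a struct-based reader - the layout is still speculative, so treat the field meanings as guesses (p_sector1 and p_table are the header values the script logs):
import struct

def read_filenames(path, p_sector1, p_table):
    """rough sketch of the filename table described above; field meanings are guesses"""
    with open(path, "rb") as f:
        f.seek(p_sector1 + p_table)
        n_files, _always_four = struct.unpack(">II", f.read(8))   # file is big endian
        f.seek(n_files * 12, 1)   # skip the 12-byte records: short, short, char[4], short, short
        names = []
        while True:
            raw = bytearray()
            byte = f.read(1)
            while byte and byte != b"\x00":
                raw += byte
                byte = f.read(1)
            if not raw:
                break             # empty name (or EOF) ends the table, like cleanexit above
            names.append(raw.decode("latin-1"))
            if f.tell() % 4:      # names are padded out to a 4-byte boundary
                f.seek(4 - f.tell() % 4, 1)
        return names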
|
As work demanded it, the time I spend with the OS X Terminal has gradually grown, and so has the emotional attachment that comes with it. To keep getting more familiar with it, a while ago I decided to customize my Terminal. I'm posting this in the tech Q&A section partly in the hope of picking up some of your own magical Terminal tricks. Let me start by sharing a few .bash_profile settings that have improved my productivity:
Background:
.bash_profile is a file that runs before your Terminal session starts; in it you can set up command aliases, text colors, and command paths. Opening it is easy: just type open ~/.bash_profile in the terminal!
Speaking of command aliases, I want to start with two:
# (1) quickly open .bash_profile
alias edit="open ~/.bash_profile"
# (2) quick clear (mostly just for style XD)
alias cl="clear"
Putting a command inside .bash_profile that opens .bash_profile faster: all according to plan.
And cl lets you wipe everything in the terminal faster than you can blink.
To test these two, save what you just added to .bash_profile, reopen the terminal, and type edit and cl to try them out!
I often feel that the time I spend cd-ing to a working directory would be enough to write another line of code. To get to my destination faster (when I'm working straight from the Terminal without an IDE), I also turn my recently used cd targets into portal aliases, like this:
# remember to change the cd destination to wherever you actually want to go!
alias proj="cd ~/Desktop/My_Project"
Once I started spending a lot of time on trial and error, I had to read a lot of the error messages the Terminal spits out. Before setting any text colors, staring at all-white text with no obvious place to start made my own face turn white too. So I decided to at least make the prompt more visible, and copied a snippet of code off the internet into .bash_profile.
export PS1="\[\033[36m\]\u\[\033[0m\]@\[\033[42m\]\h:\[\033[33;49;1m\]\w\[\033[0m\]\$ "
The result looks like this
To make it easier to see what is going on in that string, let me take it apart:
export PS1="
\[\033[36m\] -> 36m is cyan
\u -> the user name
\[\033[0m\] -> 0m is the default
@ -> the white @ in the picture
\[\033[42m\] -> green background; the text stays white because of the earlier 0m
\h -> the host name
: -> the : on the green background in the picture
\[\033[33;49;1m\] -> 33 is yellow, 49 is the default background, 1 is bold, separated by ; and ended with m
\w -> the working directory
\[\033[0m\]\ -> back to the defaults
$ -> the $ in the picture
"
You can follow the color codes on various websites to put together your own favorite scheme.
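If you'd rather not keep a color chart open, a tiny Python script (in the same spirit as the eat.py below) can print the standard ANSI foreground codes so you can pick one directly in the terminal; the file name colors.py is just a suggestion:
# colors.py -- print the standard ANSI foreground codes 30-37
# (add 10 for background codes, or 60 for the bright variants)
names = ["black", "red", "green", "yellow", "blue", "magenta", "cyan", "white"]
for code, name in zip(range(30, 38), names):
    print("\033[%dm%2d %s\033[0m" % (code, code, name))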
Too many choices and can't decide? Then write a program to choose for you. This was actually an idea from a friend of a friend XD
First, write a Python script on the desktop; as usual, we'll use the Terminal to do it:
touch ~/Desktop/eat.py
open ~/Desktop/eat.py -> I actually prefer code ~/Desktop/eat.py, but you need to install VSCode first XD
import random
food_list = ["Maccas", "KFC", "HJ's", "Pho", "Sushi", "Laksa", "Subway", "Curry", "Don", "Pizza"]
accept = 'n'
while (accept == 'n'):
    question = "Is " + random.choice(food_list) + " a good choice?(Y/n)"
    accept = input(question).lower()
print("Good.")
First run it once from the Terminal to test it: python eat.py
If that works, let's drop it into .bash_profile!
alias eat="python ~/Desktop/eat.py"
Finally, close and reopen the Terminal, type eat, and give it a try!
I came here to ask a question and ended up doing most of the talking; what I'd really love is to hear how you have turned the Terminal you spend all day with into something magical!
|
Introduction
Apart from understanding these roles and respective responsibilities, more important questions to pose are: How can three different personas, three different experiences, and three different requirements collaborate and combine their efforts? Or can they employ a unified platform rather than resort to one-off bespoke solutions?
Yes, they can collaborate and use a single platform. Last month, we announced our Unified Databricks Platform. Aimed at facilitating collaboration among data engineers, data scientists, and data analysts, two of its software artifacts, Databricks Workspace and Notebook Workflows, achieve this coveted collaboration.
In this blog, we will explore how each persona can
Employ Notebook Workflows to collaborate and construct complex data pipelines with Apache Spark
Orchestrate independent and idempotent notebooks as a single unit of execution
Eliminate the need for bespoke one-off or distinct solutions.
Amazon Public Product Ratings
First, let’s look at the data scenario. Consider our data scenario as a corpus of Amazon public product ratings, where each persona expects data in a digestible format to perform respective tasks.
A corpus of product reviews with different data artifacts, this dataset is of interest to any data scientist or data analyst. For example, a data analyst may want to explore data to examine what kinds of ratings, product categories or brands exist. By contrast, a data scientist may want to train a machine learning model to predict favorable ratings with certain keywords—such as “great” or “return” or “horrible”—in the user reviews on a periodic basis.
But neither exploration (by a data analyst) nor training the model (by a data scientist) is possible without first transforming data into a digestible format for each of the personas. And that's where a data engineer comes into the equation: she's responsible for transforming raw data into consumable data by creating a data pipeline. (We refer to the ExamplesIngestingData notebook for how a data engineer may ingest a public dataset into Databricks.)
Next, we will examine our first data pipeline, the first notebook TrainModel, and walk through the tasks pertaining to each persona.
Data Pipeline of Apache Spark Jobs
Exploring Data
For brevity we won’t go into the Python code that transformed raw data into JSON files for ingestion—that code is on this page. Instead, we will focus on our data pipeline notebook, TrainModel, that aids the data scientist and data analyst to collaborate.
Once our data engineer has ingested the corpus of product reviews into Parquet files, created an external Amazon table backed by those files, and created a temporary view from that external table to explore portions of it, both a data analyst and a data scientist can work cooperatively within this TrainModel notebook.
Rather than express computation in Python code, a language a data engineer or data scientist is more intimate with, a data analyst can express SQL queries. The point here is that the type of notebook—whether Scala, Python, R or SQL—is less important than the ability to express query in a familiar language (i.e., SQL) and to collaborate with others.
Now that we have digestible data for each persona, as a temporary table tmp_amazon, a data analyst can ask business questions and visualize data; she can query this table, for example, with the following questions:
What does the data look like?
How many different brands?
How do the brands fare in ratings?
Satisfied with her preliminary analyses, she may turn to a data scientist who can devise a machine learning model that enables them to periodically predict ratings of user reviews. As users buy and rate products on the Amazon website, on daily or weekly basis, a machine learning model can be retrained with new data on regular basis in production.
Training the Machine Learning Model
Apache Spark’s Machine Learning Library MLlib contains many algorithms for classification, regression, clustering and collaborative filtering. At a high level, the spark.ml package provides tools, techniques, and APIs for featurization, pipelining, mathematical utilities, and persistence.
When it comes to binary predictions with outcomes of good (1) or bad (0) based on certain keywords, the best model suited for this classification is Logistic Regression Model, a special case of Generalized Linear Models that predict the probability of favorable outcomes.
In our case, we want to predict rating outcomes for reviews containing certain favorable keywords. Not only will we employ the binomial logistic regression from MLlib's family of logistic regression models, but we will also use spark.ml pipelines and their Transformers and Estimators.
Create Machine Learning Pipeline
This snippet of Python code shows how to create the pipeline with transformers and estimators.
from pyspark.ml import *
from pyspark.ml.feature import *
from pyspark.ml.feature import Bucketizer
from pyspark.ml.classification import *
from pyspark.ml.tuning import *
from pyspark.ml.evaluation import *
from pyspark.ml.regression import *
#
# Bucketizer transforms a column of continuous features to a column of feature buckets, where the buckets are specified by users.
# It takes the common parameters inputCol and outputCol, as well as the splits for bucketization.
# With the splits below, ratings under 4.5 fall into bucket 0.0 and ratings of 4.5 or above
# fall into bucket 1.0, which becomes our binary label. Both Vector and Double types are supported for inputCol.
# We will use rating as our input, and the resulting label column is what the model learns to predict.
#
# For this model we will use two feature transformer extractors: Bucketizer and Tokenizer
#
splits = [-float("inf"), 4.5, float("inf")]
tok = Tokenizer(inputCol = "review", outputCol = "words")
bucket = Bucketizer(splits=splits, inputCol = "rating", outputCol = "label")
#
# use HashingTF feature extractor, with its input as "words"
#
hashTF = HashingTF(inputCol = tok.getOutputCol(), numFeatures = 10000, outputCol = "features")
#
# create a model instance with some parameters
#
lr = LogisticRegression(maxIter = 10, regParam = 0.0001, elasticNetParam = 1.0)
#
# Create the stages pipeline with all the feature transformers to create an Estimator
#
pipeline = Pipeline(stages = [bucket, tok, hashTF, lr])
Create Training and Test Data
Next, we use our training data to fit the model and finally evaluate it with our test data. The transformed DataFrame predictions should contain our predictions and labels.
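The split itself is not shown in the original snippet; assuming the ingested reviews live in a DataFrame named df (an assumption, not necessarily the notebook's actual variable name), it would typically be a one-liner:
# assumption: df is the DataFrame of Amazon reviews prepared earlier in the TrainModel notebook
(trainingData, testData) = df.randomSplit([0.8, 0.2], seed=42)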
# Create our model estimator
#
model = pipeline.fit(trainingData)
#score the model with test data
predictions = model.transform(testData)
#convert dataframe into a table so we can easily query it using SQL
predictions.createOrReplaceTempView('tmp_predictions')
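The query the next paragraph refers to is not reproduced here; a representative one over the temporary table above, written from Python for consistency, might be:
# look at reviews containing the word "return" and compare prediction against label
spark.sql("SELECT review, rating, label, prediction FROM tmp_predictions WHERE review LIKE '%return%'").show(5, False)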
As you may notice from such a query on our predictions DataFrame, saved as a temporary table, occurrences of the word return in the reviews in our test data result in a value of 0 for both prediction and label, and in low ratings, as expected.
Satisfied with the results from evaluating the model, a data scientist can persist the model for either sharing with other data scientists for further evaluation or sharing with data engineer to deploy in production.
That is accomplished by persisting the model.
Persisting the Model
Consider use cases and scenarios where a data scientist produces an ML model and wants to test and iterate over it, deploy it into production for real-time prediction serving, or share it with another data scientist to validate. How do you do it?
Persisting and serializing the ML pipeline is one way to export MLlib models. Another way is to use Databricks dbml-local library, which is the preferred way for real-time serving with very low latency requirements. An important caveat: For low-latency requirements when serving the model, we advise and advocate using dbml-local. Yet for this example, because latency is not an issue or a requirement with periodic product reviews, we are using the MLlib pipeline API for exporting and importing the models.
Although dbml-local is our preferred way to export and import models, both mechanisms of persistence are important for several reasons. First, it is easy and language independent: the model is exported as JSON. Second, it can be exported from one notebook, written in Python, and imported (loaded) into another notebook, written in Scala; persisting and serializing an ML pipeline and its exchange format are language independent. Third, serializing and persisting the pipeline encapsulates all featurization, not just the model. And finally, it is what lets you serve your model for real-time prediction with Structured Streaming, as we do later in this example.
model.write().overwrite().save("/mnt/jules/amazon-model")
In the next section, we discuss our second pipeline, CreateStream.
Creating Streams
Consider this scenario: We have access to a live stream of product reviews and, using our trained model, we want to score them against our model. A data engineer can offer this real-time data in two ways: one through Kafka or Kinesis, as users rate products on the Amazon website; the other by taking new entries inserted into the table, which were not part of the training set, and converting them into JSON files on S3. Indeed, that will just work, because the Structured Streaming API reads data in the same manner whether your data sources are Blobs, files in S3, or streams from Kinesis or Kafka. We elected S3 over a distributed queue for low cost and low latency.
In our case, a data engineer can simply extract the most recent entries from our table, built atop Parquet files. This short pipeline consists of three Spark jobs (a rough sketch follows the list):
Query new product data from the Amazon table
Convert the resulting DataFrame
Store our DataFrames as JSON Files on S3
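A minimal Python sketch of those three steps, assuming the external table is named amazon and using a hypothetical S3 mount path (the time cutoff in the WHERE clause is likewise an assumption):
# 1) query new product data from the Amazon table (the time cutoff is an assumption)
newReviews = spark.sql("SELECT * FROM amazon WHERE time > 1478736000")
# 2) convert / repartition the resulting DataFrame into a handful of files
newReviews = newReviews.repartition(4)
# 3) store it as JSON files on S3 (the mount path is hypothetical)
newReviews.write.mode("overwrite").json("/mnt/jules/amazon-stream")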
To simulate streams, we can treat each file as a collection of rows of JSON data and feed them as streaming data to score against our model. This is not an uncommon case: a data scientist has trained a model, and a data engineer is tasked with providing a way to get a stream of live data persisted someplace where she can easily read it and evaluate it against the trained model.
To see how this is implemented, read the CreateStream notebook; its output serves JSON files as streams of Amazon reviews to the ServeModel notebook—to score against our persisted model. This leads to our final pipeline.
Serving, Importing and Scoring a Model
Consider the final scenario: We now have access to a live stream (or near a live stream) of new product reviews, and we have access to our trained model, which is persisted on our S3 bucket. A data scientist can then employ both these assets.
Let’s see how. In our case, a data scientist can simply create short pipeline of four Spark jobs:
Load the model from data store
Read the JSON files as DataFrame input stream
Transform the model with input stream
Query the prediction
// load the model from S3 path
import org.apache.spark.ml.PipelineModel
val model = PipelineModel.load(model_path)
import org.apache.spark.sql.types._
// define the JSON schema for our stream of JSON files
val streamSchema = new StructType()
.add(StructField("rating",DoubleType,true))
.add(StructField("review",StringType,true))
.add(StructField("time",LongType,true))
.add(StructField("title",StringType,true))
.add(StructField("user",StringType,true))
//read streams
spark.conf.set("spark.sql.shuffle.partitions", "4")
val inputStream = spark
.readStream
.schema(streamSchema)
.option("maxFilesPerTrigger", 1)
.json(stream_path)
// transform with the new data in the stream
val scoredStream = model.transform(inputStream)
// and use the stream query for predictions
val queryStream = scoredStream.writeStream
.format("memory")
.queryName("streamPrediction")
.start()
// query the transformed DataFrame with new predictions
Since all the featurization is encapsulated in the persisted model, all we need to do is load this serialized model as is from disk and use it to serve and score our new data. Moreover, note that we created this model in the TrainModel notebook, which is written in Python, and we loaded it inside a Scala notebook. This shows that regardless of which language each persona uses to create notebooks, they can share persisted models across the languages supported in Apache Spark.
Databricks Notebook Workflow Orchestration
Central to collaboration and coordination are Notebook Workflows’ APIs. With these APIs, a data engineer can string together all the aforementioned pipelines as a single unit of execution.
One way to achieve this is to share inputs and outputs among the notebooks in the chain. That is, a notebook's output and exit status serve as input to the next notebook in the flow. Notebook Widgets allow parameterizing input to notebooks, whereas a notebook's exit status can pass arguments to the next one in the flow.
In our example, RunNotebooks invokes each notebook in the flow, with parameterized arguments. It will orchestrate three other notebooks, each executing its own data pipeline, creating its own Spark jobs within, and finally emitting a JSON document as its exit status. This JSON document then serves as an input parameter to the subsequent notebook in the pipeline.
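On the callee side, that handshake comes down to one call at the end of each notebook; for example, 001_TrainModel might end with something like the following (the exact payload is an assumption, chosen to be consistent with the driver code below):
import json
# emit the exit status that the driver notebook reads via dbutils.notebook.run(...)
dbutils.notebook.exit(json.dumps({"status": "OK", "model_path": "/mnt/jules/amazon-model"}))
On the receiving end, 003_ServeModelToStreaming would read its parameters with dbutils.widgets.get("model_path") and dbutils.widgets.get("stream_path"), matching the map the driver passes in.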
# do the usual import packages
import json
import sys
#
# Run the notebook and get the path to the persisted model
# fetch the return value from the callee 001_TrainModel
returned_json = json.loads(dbutils.notebook.run("001_TrainModel", 3600, {}))
if returned_json['status'] == 'OK':
    model_path = returned_json['model_path']
    try:
        # Create a stream from the table
        # fetch the return value from the callee 002_CreateStream
        returned_json = json.loads(dbutils.notebook.run("002_CreateStream", 3600, {}))
        if returned_json['status'] == 'OK':
            stream_path = returned_json['stream_path']
            args = {"model_path": model_path, "stream_path": stream_path}
            # fetch the return value from the callee 003_ServeModelToStreaming
            result = dbutils.notebook.run("003_ServeModelToStreaming", 7200, args)
            print(result)
        else:
            raise Exception("Notebook to create stream failed!")
    except Exception:
        print("Unexpected error:", sys.exc_info()[0])
        raise
else:
    print("Something went wrong " + returned_json['message'])
Finally, not only can you run this particular notebook as an ephemeral job, but you can also schedule the flow using the Job Scheduler.
What’s Next
RunNotebooks, created by a data engineer
TrainModel, created by a data engineer, data analyst, and data scientist
CreateStream, created by a data engineer
ServeModel, created by a data scientist and data engineer
ExamplesIngestingData, a sample notebook for data engineer
In summary, we demonstrated that big data practitioners can work together in Databricks' Unified Analytics Platform to create notebooks, explore data, train models, export models, and evaluate their trained models against new real-time data. Together, they become productive when complex data pipelines, built as myriad notebooks by different personas, can be executed as a single, sequential unit of execution. Through the Notebook Workflows APIs, we demonstrated a unified experience rather than bespoke one-off solutions. All of that promises benefits.
Read More
To understand Notebook Workflows, Widgets, and Notebook integration with GitHub, read the following:
Notebook Workflows: The Easiest Way to Implement Apache Spark Pipelines
Notebook Workflows
Notebook Widgets
Notebook Github Integration
|